From c38fbcc6a66cc88e9032c84084ac3d8bc1de8c61 Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower"
Date: Thu, 26 May 2016 05:20:40 -0800
Subject: Update generated Python Op docs.

Change: 123313010
---
 .../shard0/tf.AggregationMethod.md | 10 +
 .../functions_and_classes/shard0/tf.Assert.md | 29 +
 .../functions_and_classes/shard0/tf.FIFOQueue.md | 41 +
 .../shard0/tf.FixedLenFeature.md | 31 +
 .../shard0/tf.FixedLengthRecordReader.md | 148 ----
 .../functions_and_classes/shard0/tf.NoGradient.md | 23 -
 .../shard0/tf.RegisterGradient.md | 36 -
 .../shard0/tf.SparseTensorValue.md | 22 -
 .../functions_and_classes/shard0/tf.TensorShape.md | 316 --------
 .../shard0/tf.accumulate_n.md | 37 -
 .../shard0/tf.assert_equal.md | 35 -
 .../functions_and_classes/shard0/tf.assert_less.md | 35 -
 .../shard0/tf.assert_negative.md | 33 -
 .../shard0/tf.assert_non_negative.md | 34 -
 .../shard0/tf.assert_rank_at_least.md | 37 -
 .../functions_and_classes/shard0/tf.assert_type.md | 15 +
 .../shard0/tf.batch_ifft2d.md | 18 +
 .../shard0/tf.batch_matrix_band_part.md | 60 ++
 .../shard0/tf.batch_to_space.md | 37 -
 .../functions_and_classes/shard0/tf.bytes.md | 4 +
 .../functions_and_classes/shard0/tf.concat.md | 45 ++
 .../shard0/tf.contrib.copy_graph.get_copied_op.md | 18 +
 .../tf.contrib.distributions.BaseDistribution.md | 195 -----
 .../shard0/tf.contrib.distributions.Chi2.md | 260 +++++++
 .../shard0/tf.contrib.layers.l2_regularizer.md | 22 -
 .../shard0/tf.contrib.layers.optimize_loss.md | 43 ++
 .../shard0/tf.contrib.layers.xavier_initializer.md | 29 +
 .../shard0/tf.contrib.learn.BaseEstimator.md | 189 -----
 .../tf.contrib.learn.TensorFlowClassifier.md | 279 -------
 .../tf.contrib.learn.TensorFlowRNNClassifier.md | 312 ++++++++
 .../shard0/tf.contrib.learn.evaluate.md | 44 ++
 .../shard0/tf.contrib.learn.extract_dask_data.md | 4 +
 .../shard0/tf.contrib.learn.extract_pandas_data.md | 4 +
 .../shard0/tf.contrib.learn.read_batch_examples.md | 39 -
 .../shard0/tf.contrib.learn.run_feeds.md | 28 -
 .../shard0/tf.contrib.learn.train.md | 62 ++
 .../tf.contrib.metrics.streaming_accuracy.md | 51 ++
 ...ontrib.metrics.streaming_mean_absolute_error.md | 48 ++
 ...trib.metrics.streaming_sparse_precision_at_k.md | 60 --
 ...contrib.metrics.streaming_sparse_recall_at_k.md | 59 --
 .../shard0/tf.decode_json_example.md | 25 -
 .../shard0/tf.delete_session_tensor.md | 19 -
 .../python/functions_and_classes/shard0/tf.div.md | 15 +
 .../shard0/tf.dynamic_stitch.md | 53 --
 .../python/functions_and_classes/shard0/tf.erfc.md | 14 +
 .../shard0/tf.errors.AlreadyExistsError.md | 14 +
 .../shard0/tf.errors.InvalidArgumentError.md | 17 +
 .../shard0/tf.errors.UnavailableError.md | 11 +
 .../functions_and_classes/shard0/tf.greater.md | 15 -
 .../functions_and_classes/shard0/tf.group.md | 25 +
 .../shard0/tf.image.encode_jpeg.md | 51 --
 .../shard0/tf.image.extract_glimpse.md | 31 -
 .../shard0/tf.image.random_brightness.md | 25 -
 .../shard0/tf.image.random_contrast.md | 26 +
 .../shard0/tf.image.resize_area.md | 24 +
 .../shard0/tf.image.resize_images.md | 43 --
 .../shard0/tf.image.transpose_image.md | 20 -
 .../shard0/tf.initialize_local_variables.md | 10 +
 .../python/functions_and_classes/shard0/tf.inv.md | 16 +
 .../shard0/tf.matching_files.md | 17 -
 .../shard0/tf.matrix_solve_ls.md | 47 ++
 .../shard0/tf.merge_all_summaries.md | 16 +
 .../shard0/tf.nn.batch_normalization.md | 46 ++
 .../functions_and_classes/shard0/tf.nn.nce_loss.md | 53 --
 .../functions_and_classes/shard0/tf.nn.relu.md | 14 -
 .../tf.nn.softmax_cross_entropy_with_logits.md | 36 -
 .../functions_and_classes/shard0/tf.nn.softplus.md | 14 -
 .../functions_and_classes/shard0/tf.nn.top_k.md | 31 -
 .../shard0/tf.nn.uniform_candidate_sampler.md | 49 --
 .../functions_and_classes/shard0/tf.no_op.md | 13 +
 .../shard0/tf.parse_single_example.md | 35 -
 .../functions_and_classes/shard0/tf.polygamma.md | 22 +
 .../python/functions_and_classes/shard0/tf.pow.md | 24 -
 .../functions_and_classes/shard0/tf.reduce_any.md | 35 -
 .../functions_and_classes/shard0/tf.reduce_prod.md | 25 -
 .../shard0/tf.report_uninitialized_variables.md | 19 +
 .../shard0/tf.reset_default_graph.md | 10 -
 .../shard0/tf.reverse_sequence.md | 76 ++
 .../functions_and_classes/shard0/tf.round.md | 21 -
 .../shard0/tf.self_adjoint_eig.md | 21 +
 .../shard0/tf.set_random_seed.md | 98 ---
 .../functions_and_classes/shard0/tf.shape.md | 23 +
 .../python/functions_and_classes/shard0/tf.sign.md | 18 -
 .../shard0/tf.space_to_batch.md | 44 --
 .../shard0/tf.sparse_merge.md | 73 --
 .../shard0/tf.sparse_segment_sqrt_n_grad.md | 24 -
 .../shard0/tf.sparse_to_dense.md | 45 --
 .../shard0/tf.squared_difference.md | 15 -
 .../functions_and_classes/shard0/tf.squeeze.md | 38 -
 .../shard0/tf.string_to_hash_bucket_fast.md | 23 +
 .../shard0/tf.string_to_number.md | 20 -
 .../python/functions_and_classes/shard0/tf.tanh.md | 16 -
 .../functions_and_classes/shard0/tf.test.main.md | 4 +
 .../functions_and_classes/shard0/tf.to_double.md | 19 +
 .../shard0/tf.train.AdadeltaOptimizer.md | 23 +
 .../shard0/tf.train.GradientDescentOptimizer.md | 18 -
 .../shard0/tf.train.LooperThread.loop.md | 22 +
 .../shard0/tf.train.QueueRunner.md | 161 ++++
 .../shard0/tf.train.RMSPropOptimizer.md | 23 +
 .../shard0/tf.train.batch_join.md | 79 --
 .../tf.train.generate_checkpoint_state_proto.md | 20 +
 .../shard0/tf.train.limit_epochs.md | 21 +
 .../shard0/tf.truncated_normal.md | 27 +
 .../shard0/tf.unique_with_counts.md | 36 -
 .../shard0/tf.variable_axis_size_partitioner.md | 37 +
 .../shard0/tf.variable_scope.md | 82 --
 .../functions_and_classes/shard0/tf.zeros_like.md | 28 -
 .../functions_and_classes/shard1/tf.Assert.md | 29 -
 .../shard1/tf.QueueBase.from_list.md | 21 -
 .../functions_and_classes/shard1/tf.QueueBase.md | 268 +++++++
 .../shard1/tf.RandomShuffleQueue.md | 54 ++
 .../shard1/tf.TextLineReader.md | 148 ----
 .../shard1/tf.VariableScope.md | 105 +++
 .../shard1/tf.assert_negative.md | 33 +
 .../functions_and_classes/shard1/tf.assert_rank.md | 36 -
 .../shard1/tf.batch_matrix_determinant.md | 19 -
 .../shard1/tf.batch_matrix_diag.md | 42 +
 .../shard1/tf.batch_matrix_solve.md | 27 -
 .../shard1/tf.batch_matrix_solve_ls.md | 56 ++
 .../shard1/tf.batch_self_adjoint_eig.md | 22 +
 .../shard1/tf.cholesky_solve.md | 35 -
 .../functions_and_classes/shard1/tf.complex_abs.md | 26 -
 .../functions_and_classes/shard1/tf.constant.md | 50 --
 .../shard1/tf.constant_initializer.md | 20 +
 .../shard1/tf.contrib.distributions.Exponential.md | 260 -------
 .../tf.contrib.distributions.MultivariateNormal.md | 218 ------
 .../tf.contrib.layers.summarize_collection.md | 4 -
 .../shard1/tf.contrib.layers.summarize_tensors.md | 4 +
 ....contrib.layers.variance_scaling_initializer.md | 47 --
 .../shard1/tf.contrib.learn.Estimator.md | 215 ------
 .../tf.contrib.learn.TensorFlowClassifier.md | 279 +++++++
 .../shard1/tf.contrib.learn.TensorFlowEstimator.md | 295 +++++++
 .../tf.contrib.learn.TensorFlowLinearRegressor.md | 279 -------
 .../shard1/tf.contrib.learn.extract_pandas_data.md | 4 -
 .../tf.contrib.learn.extract_pandas_matrix.md | 4 +
 .../shard1/tf.contrib.learn.read_batch_features.md | 43 ++
 .../tf.contrib.metrics.streaming_accuracy.md | 51 --
 ...ontrib.metrics.streaming_mean_relative_error.md | 49 ++
 ...contrib.metrics.streaming_mean_squared_error.md | 48 ++
 .../tf.contrib.metrics.streaming_recall_at_k.md | 52 ++
 .../shard1/tf.contrib.util.constant_value.md | 31 -
 .../functions_and_classes/shard1/tf.cross.md | 22 +
 .../shard1/tf.decode_json_example.md | 25 +
 .../shard1/tf.delete_session_tensor.md | 19 +
 .../python/functions_and_classes/shard1/tf.div.md | 15 -
 .../shard1/tf.errors.CancelledError.md | 17 +
 .../shard1/tf.errors.DataLossError.md | 13 -
 .../shard1/tf.errors.FailedPreconditionError.md | 13 -
 .../python/functions_and_classes/shard1/tf.fft.md | 14 +
 .../shard1/tf.get_collection.md | 25 +
 .../functions_and_classes/shard1/tf.group.md | 25 -
 .../shard1/tf.histogram_summary.md | 25 -
 .../functions_and_classes/shard1/tf.igamma.md | 29 -
 .../shard1/tf.image.convert_image_dtype.md | 32 -
 .../shard1/tf.image.decode_jpeg.md | 41 +
 .../shard1/tf.image.flip_left_right.md | 23 -
 .../shard1/tf.image.flip_up_down.md | 23 +
 .../shard1/tf.image.hsv_to_rgb.md | 21 -
 .../shard1/tf.image.random_brightness.md | 25 +
 .../shard1/tf.image.random_saturation.md | 27 -
 .../shard1/tf.image.resize_area.md | 24 -
 .../shard1/tf.image.resize_bicubic.md | 24 +
 .../shard1/tf.import_graph_def.md | 49 --
 .../shard1/tf.initialize_variables.md | 24 -
 .../functions_and_classes/shard1/tf.is_inf.md | 14 -
 .../functions_and_classes/shard1/tf.linspace.md | 28 +
 .../functions_and_classes/shard1/tf.listdiff.md | 40 +
 .../shard1/tf.local_variables.md | 8 -
 .../python/functions_and_classes/shard1/tf.log.md | 16 +
 .../shard1/tf.matrix_solve.md | 21 +
 .../functions_and_classes/shard1/tf.maximum.md | 15 +
 .../shard1/tf.merge_summary.md | 26 -
 .../python/functions_and_classes/shard1/tf.mul.md | 15 +
 .../shard1/tf.nn.atrous_conv2d.md | 107 +++
 .../shard1/tf.nn.conv2d_transpose.md | 34 +
 .../functions_and_classes/shard1/tf.nn.dropout.md | 38 +
 .../functions_and_classes/shard1/tf.nn.l2_loss.md | 19 -
 .../shard1/tf.nn.max_pool_with_argmax.md | 30 -
 .../tf.nn.sigmoid_cross_entropy_with_logits.md | 48 ++
 .../tf.nn.softmax_cross_entropy_with_logits.md | 36 +
 ....nn.sparse_softmax_cross_entropy_with_logits.md | 38 +
 .../tf.nn.weighted_cross_entropy_with_logits.md | 52 ++
 .../shard1/tf.no_regularizer.md | 4 -
 .../python/functions_and_classes/shard1/tf.pack.md | 23 +
 .../functions_and_classes/shard1/tf.placeholder.md | 34 -
 .../python/functions_and_classes/shard1/tf.pow.md | 24 +
 .../functions_and_classes/shard1/tf.py_func.md | 31 +
 .../shard1/tf.random_uniform.md | 41 -
 .../python/functions_and_classes/shard1/tf.rank.md | 28 -
 .../python/functions_and_classes/shard1/tf.real.md | 28 -
 .../functions_and_classes/shard1/tf.reduce_prod.md | 25 +
 .../shard1/tf.report_uninitialized_variables.md | 19 -
 .../functions_and_classes/shard1/tf.reverse.md | 61 ++
 .../python/functions_and_classes/shard1/tf.scan.md | 44 ++
 .../shard1/tf.segment_prod.md | 31 +
 .../shard1/tf.self_adjoint_eig.md | 21 -
 .../functions_and_classes/shard1/tf.shape.md | 23 -
 .../functions_and_classes/shard1/tf.shape_n.md | 16 +
 .../shard1/tf.squared_difference.md | 15 +
 .../shard1/tf.string_to_hash_bucket.md | 21 -
 .../python/functions_and_classes/shard1/tf.sub.md | 15 +
 .../shard1/tf.test.get_temp_dir.md | 10 -
 .../shard1/tf.test.is_built_with_cuda.md | 4 +
 .../python/functions_and_classes/shard1/tf.tile.md | 22 -
 .../functions_and_classes/shard1/tf.to_int64.md | 19 +
 .../shard1/tf.train.AdadeltaOptimizer.md | 23 -
 .../shard1/tf.train.AdagradOptimizer.md | 26 +
 .../shard1/tf.train.GradientDescentOptimizer.md | 18 +
 .../shard1/tf.train.LooperThread.loop.md | 22 -
 .../shard1/tf.train.global_step.md | 27 +
 .../shard1/tf.train.match_filenames_once.md | 14 -
 .../shard1/tf.train.replica_device_setter.md | 50 --
 .../shard1/tf.train.slice_input_producer.md | 35 -
 .../shard1/tf.train.start_queue_runners.md | 24 +
 .../shard1/tf.train.string_input_producer.md | 32 -
 .../shard1/tf.truncated_normal.md | 27 -
 .../shard1/tf.unique_with_counts.md | 36 +
 .../shard1/tf.verify_tensor_all_finite.md | 15 +
 .../shard2/tf.DeviceSpec.from_string.md | 18 -
 .../functions_and_classes/shard2/tf.DeviceSpec.md | 146 ----
 .../shard2/tf.IdentityReader.md | 148 ----
 .../shard2/tf.InteractiveSession.md | 68 ++
 .../shard2/tf.QueueBase.from_list.md | 21 +
 .../shard2/tf.RandomShuffleQueue.md | 54 --
 .../shard2/tf.RegisterShape.md | 27 -
 .../functions_and_classes/shard2/tf.Session.md | 236 ------
 .../shard2/tf.TFRecordReader.md | 145 ++++
 .../shard2/tf.Variable.from_proto.md | 4 +
 .../shard2/tf.WholeFileReader.md | 148 ----
 .../python/functions_and_classes/shard2/tf.abs.md | 22 -
 .../shard2/tf.all_variables.md | 12 +
 .../shard2/tf.assert_less_equal.md | 35 +
 .../shard2/tf.batch_cholesky_solve.md | 35 -
 .../functions_and_classes/shard2/tf.batch_fft.md | 18 +
 .../shard2/tf.batch_ifft3d.md | 18 -
 .../shard2/tf.batch_matrix_inverse.md | 28 -
 .../shard2/tf.batch_self_adjoint_eig.md | 22 -
 .../functions_and_classes/shard2/tf.bitcast.md | 25 -
 .../shard2/tf.boolean_mask.md | 43 --
 .../python/functions_and_classes/shard2/tf.ceil.md | 14 -
 .../shard2/tf.check_numerics.md | 18 -
 .../shard2/tf.cholesky_solve.md | 35 +
 .../functions_and_classes/shard2/tf.complex.md | 30 -
 .../functions_and_classes/shard2/tf.complex_abs.md | 26 +
 .../python/functions_and_classes/shard2/tf.cond.md | 54 ++
 .../python/functions_and_classes/shard2/tf.conj.md | 28 +
 .../functions_and_classes/shard2/tf.constant.md | 50 ++
 .../tf.contrib.distributions.MultivariateNormal.md | 218 ++++++
 .../shard2/tf.contrib.layers.fully_connected.md | 46 --
 .../tf.contrib.layers.summarize_activations.md | 4 +
 .../tf.contrib.layers.xavier_initializer_conv2d.md | 29 +
 .../shard2/tf.contrib.learn.RunConfig.md | 47 --
 .../tf.contrib.learn.TensorFlowDNNClassifier.md | 302 ++++++++
 .../tf.contrib.learn.TensorFlowLinearClassifier.md | 279 +++++++
 .../tf.contrib.learn.extract_pandas_matrix.md | 4 -
 .../shard2/tf.contrib.learn.infer.md | 4 +
 .../tf.contrib.learn.read_batch_record_features.md | 33 +
 .../shard2/tf.contrib.metrics.set_difference.md | 24 +
 ...ontrib.metrics.streaming_mean_relative_error.md | 49 --
 .../tf.contrib.metrics.streaming_recall_at_k.md | 52 --
 .../functions_and_classes/shard2/tf.decode_csv.md | 26 -
 .../shard2/tf.depth_to_space.md | 95 +++
 .../python/functions_and_classes/shard2/tf.erf.md | 14 +
 .../shard2/tf.errors.InternalError.md | 12 +
 .../shard2/tf.errors.UnauthenticatedError.md | 11 +
 .../shard2/tf.errors.UnknownError.md | 15 +
 .../python/functions_and_classes/shard2/tf.exp.md | 14 +
 .../functions_and_classes/shard2/tf.fft2d.md | 14 -
 .../functions_and_classes/shard2/tf.floor.md | 14 -
 .../functions_and_classes/shard2/tf.floordiv.md | 32 -
 .../functions_and_classes/shard2/tf.gather_nd.md | 30 -
 .../shard2/tf.get_default_session.md | 16 +
 .../shard2/tf.get_variable.md | 72 --
 .../python/functions_and_classes/shard2/tf.ifft.md | 15 -
 .../shard2/tf.image.decode_png.md | 30 -
 .../shard2/tf.image.per_image_whitening.md | 29 -
 .../shard2/tf.image.random_flip_up_down.md | 24 +
 .../shard2/tf.import_graph_def.md | 49 ++
 .../shard2/tf.initialize_all_variables.md | 10 +
 .../shard2/tf.invert_permutation.md | 30 -
 .../functions_and_classes/shard2/tf.is_nan.md | 14 +
 .../shard2/tf.local_variables.md | 8 +
 .../python/functions_and_classes/shard2/tf.log.md | 16 -
 .../functions_and_classes/shard2/tf.matmul.md | 46 ++
 .../shard2/tf.matrix_inverse.md | 27 +
 .../shard2/tf.matrix_solve.md | 21 -
 .../shard2/tf.matrix_triangular_solve.md | 35 +
 .../python/functions_and_classes/shard2/tf.mod.md | 15 -
 .../functions_and_classes/shard2/tf.nn.dropout.md | 38 -
 .../functions_and_classes/shard2/tf.nn.in_top_k.md | 33 +
 .../shard2/tf.nn.l2_normalize.md | 24 -
 .../tf.nn.learned_unigram_candidate_sampler.md | 53 ++
 .../shard2/tf.nn.normalize_moments.md | 20 -
 .../functions_and_classes/shard2/tf.nn.relu6.md | 15 +
 .../python/functions_and_classes/shard2/tf.ones.md | 24 -
 .../python/functions_and_classes/shard2/tf.pack.md | 23 -
 .../shard2/tf.random_shuffle.md | 29 -
 .../shard2/tf.random_uniform_initializer.md | 25 -
 .../functions_and_classes/shard2/tf.reduce_min.md | 25 -
 .../functions_and_classes/shard2/tf.scalar_mul.md | 23 -
 .../shard2/tf.scalar_summary.md | 21 -
 .../functions_and_classes/shard2/tf.segment_max.md | 30 -
 .../shard2/tf.segment_mean.md | 32 -
 .../functions_and_classes/shard2/tf.sigmoid.md | 18 -
 .../shard2/tf.space_to_depth.md | 87 +++
 .../shard2/tf.sparse_segment_mean.md | 27 -
 .../shard2/tf.stop_gradient.md | 34 +
 .../shard2/tf.test.assert_equal_graph_def.md | 20 -
 .../shard2/tf.test.compute_gradient_error.md | 36 +
 .../functions_and_classes/shard2/tf.to_bfloat16.md | 19 +
 .../functions_and_classes/shard2/tf.to_float.md | 19 +
 .../shard2/tf.train.LooperThread.md | 215 ------
 .../shard2/tf.train.MomentumOptimizer.md | 18 +
 .../shard2/tf.train.Saver.from_proto.md | 4 +
 .../functions_and_classes/shard2/tf.train.Saver.md | 315 ++++++++
 .../shard2/tf.train.SummaryWriter.md | 170 +++++
 .../shard2/tf.train.export_meta_graph.md | 24 +
 .../shard2/tf.train.range_input_producer.md | 25 -
 .../shard2/tf.train.shuffle_batch.md | 74 --
 .../shard2/tf.train.shuffle_batch_join.md | 68 --
 .../shard2/tf.train.slice_input_producer.md | 35 +
 .../shard2/tf.train.write_graph.md | 21 -
 .../functions_and_classes/shard2/tf.truediv.md | 31 +
 .../shard2/tf.variable_op_scope.md | 56 ++
 .../functions_and_classes/shard3/tf.Graph.md | 783 +++++++++++++++++++
 .../shard3/tf.IndexedSlices.md | 93 +++
 .../shard3/tf.SparseTensorValue.md | 22 +
 .../shard3/tf.WholeFileReader.md | 148 ++++
 .../functions_and_classes/shard3/tf.add_n.md | 15 +
 .../shard3/tf.assert_rank_at_least.md | 37 +
 .../shard3/tf.assert_variables_initialized.md | 24 +
 .../shard3/tf.batch_cholesky.md | 20 +
 .../functions_and_classes/shard3/tf.batch_fft2d.md | 18 -
 .../functions_and_classes/shard3/tf.batch_fft3d.md | 18 +
 .../functions_and_classes/shard3/tf.batch_ifft.md | 18 -
 .../shard3/tf.batch_matrix_band_part.md | 60 --
 .../shard3/tf.batch_matrix_triangular_solve.md | 39 +
 .../functions_and_classes/shard3/tf.bytes.md | 4 -
 .../python/functions_and_classes/shard3/tf.cast.md | 30 +
 .../functions_and_classes/shard3/tf.complex.md | 30 +
 .../python/functions_and_classes/shard3/tf.cond.md | 54 --
 ...f.contrib.distributions.DiscreteDistribution.md | 139 ++++
 .../shard3/tf.contrib.distributions.Uniform.md | 216 ++++++
 ...ions.normal_conjugates_known_sigma_posterior.md | 48 --
 .../shard3/tf.contrib.ffmpeg.decode_audio.md | 25 -
 .../tf.contrib.layers.apply_regularization.md | 27 +
 .../shard3/tf.contrib.layers.sum_regularizer.md | 14 +
 .../shard3/tf.contrib.layers.summarize_tensor.md | 18 +
 .../tf.contrib.layers.xavier_initializer_conv2d.md | 29 -
 .../shard3/tf.contrib.learn.ModeKeys.md | 7 -
 .../tf.contrib.learn.TensorFlowDNNRegressor.md | 302 --------
 .../tf.contrib.learn.TensorFlowRNNRegressor.md | 312 --------
 .../shard3/tf.contrib.learn.extract_dask_labels.md | 4 +
 .../shard3/tf.contrib.learn.read_batch_examples.md | 39 +
 .../tf.contrib.learn.read_batch_record_features.md | 33 -
 .../shard3/tf.contrib.metrics.accuracy.md | 23 +
 .../tf.contrib.metrics.auc_using_histogram.md | 38 -
 .../shard3/tf.contrib.metrics.set_union.md | 23 +
 .../shard3/tf.contrib.metrics.streaming_auc.md | 58 ++
 .../shard3/tf.contrib.util.make_ndarray.md | 20 -
 .../tf.contrib.util.ops_used_by_graph_def.md | 15 -
 .../tf.contrib.util.stripped_op_list_for_graph.md | 23 +
 .../tf.convert_to_tensor_or_indexed_slices.md | 27 -
 .../shard3/tf.depth_to_space.md | 95 ---
 .../python/functions_and_classes/shard3/tf.erf.md | 14 -
 .../shard3/tf.errors.DeadlineExceededError.md | 11 -
 .../shard3/tf.errors.NotFoundError.md | 14 -
 .../python/functions_and_classes/shard3/tf.exp.md | 14 -
 .../functions_and_classes/shard3/tf.gather_nd.md | 30 +
 .../shard3/tf.get_collection_ref.md | 20 +
 .../functions_and_classes/shard3/tf.gradients.md | 48 --
 .../functions_and_classes/shard3/tf.greater.md | 15 +
 .../functions_and_classes/shard3/tf.identity.md | 14 -
 .../functions_and_classes/shard3/tf.ifft3d.md | 15 -
 .../shard3/tf.image.adjust_contrast.md | 29 -
 .../shard3/tf.image.adjust_hue.md | 26 +
 .../shard3/tf.image.adjust_saturation.md | 25 -
 .../shard3/tf.image.extract_glimpse.md | 31 +
 .../shard3/tf.image.grayscale_to_rgb.md | 17 -
 .../shard3/tf.image.random_hue.md | 28 +
 .../shard3/tf.image_summary.md | 48 ++
 .../shard3/tf.is_non_decreasing.md | 25 +
 .../shard3/tf.is_strictly_increasing.md | 26 +
 .../functions_and_classes/shard3/tf.less_equal.md | 15 -
 .../shard3/tf.make_template.md | 105 ---
 .../functions_and_classes/shard3/tf.multinomial.md | 28 -
 .../shard3/tf.nn.avg_pool3d.md | 24 +
 .../functions_and_classes/shard3/tf.nn.bias_add.md | 24 -
 .../shard3/tf.nn.depthwise_conv2d.md | 37 +
 .../shard3/tf.nn.embedding_lookup_sparse.md | 66 ++
 .../tf.nn.fixed_unigram_candidate_sampler.md | 75 ++
 .../shard3/tf.nn.l2_normalize.md | 24 +
 .../shard3/tf.nn.local_response_normalization.md | 34 -
 .../shard3/tf.nn.log_uniform_candidate_sampler.md | 56 --
 .../shard3/tf.nn.normalize_moments.md | 20 +
 .../shard3/tf.nn.sampled_softmax_loss.md | 49 --
 .../shard3/tf.nn.separable_conv2d.md | 40 +
 .../functions_and_classes/shard3/tf.ones_like.md | 28 -
 .../python/functions_and_classes/shard3/tf.pad.md | 57 --
 .../shard3/tf.parse_single_example.md | 35 +
 .../shard3/tf.placeholder_with_default.md | 17 -
 .../functions_and_classes/shard3/tf.read_file.md | 14 -
 .../functions_and_classes/shard3/tf.reduce_mean.md | 35 +
 .../shard3/tf.scalar_summary.md | 21 +
 .../functions_and_classes/shard3/tf.scatter_add.md | 46 ++
 .../functions_and_classes/shard3/tf.scatter_sub.md | 44 --
 .../functions_and_classes/shard3/tf.segment_max.md | 30 +
 .../shard3/tf.segment_mean.md | 32 +
 .../functions_and_classes/shard3/tf.segment_sum.md | 30 -
 .../functions_and_classes/shard3/tf.sparse_add.md | 55 --
 .../shard3/tf.sparse_concat.md | 100 ---
 .../shard3/tf.sparse_fill_empty_rows.md | 54 --
 .../shard3/tf.sparse_merge.md | 73 ++
 .../shard3/tf.sparse_placeholder.md | 43 --
 .../shard3/tf.sparse_reset_shape.md | 60 --
 .../shard3/tf.sparse_segment_mean.md | 27 +
 .../shard3/tf.sparse_softmax.md | 51 ++
 .../functions_and_classes/shard3/tf.split.md | 29 +
 .../functions_and_classes/shard3/tf.square.md | 16 +
 .../shard3/tf.test.assert_equal_graph_def.md | 20 +
 .../functions_and_classes/shard3/tf.to_bfloat16.md | 19 -
 .../functions_and_classes/shard3/tf.to_int32.md | 19 -
 .../shard3/tf.train.Coordinator.md | 223 ++++++
 .../functions_and_classes/shard3/tf.train.Saver.md | 315 --------
 .../shard3/tf.train.Server.create_local_server.md | 19 +
 .../shard3/tf.train.exponential_decay.md | 54 ++
 .../shard3/tf.train.get_checkpoint_state.md | 19 +
 .../shard3/tf.train.range_input_producer.md | 25 +
 .../shard3/tf.train.write_graph.md | 21 +
 .../functions_and_classes/shard3/tf.tuple.md | 36 +
 .../shard3/tf.unsorted_segment_sum.md | 38 -
 .../functions_and_classes/shard3/tf.while_loop.md | 60 --
 .../shard3/tf.zeros_initializer.md | 4 +
 .../functions_and_classes/shard4/tf.DType.md | 206 -----
 .../shard4/tf.IndexedSlices.md | 93 ---
 .../shard4/tf.VariableScope.md | 105 ---
 .../shard4/tf.add_check_numerics_ops.md | 13 +
 .../shard4/tf.assert_variables_initialized.md | 24 -
 .../functions_and_classes/shard4/tf.batch_fft3d.md | 18 -
 .../python/functions_and_classes/shard4/tf.cast.md | 30 -
 .../functions_and_classes/shard4/tf.cholesky.md | 22 +
 .../shard4/tf.contrib.distributions.Uniform.md | 216 ------
 ...ons.normal_congugates_known_sigma_predictive.md | 55 ++
 .../tf.contrib.layers.apply_regularization.md | 27 -
 .../shard4/tf.contrib.layers.l1_regularizer.md | 22 +
 .../shard4/tf.contrib.layers.sum_regularizer.md | 14 -
 .../tf.contrib.layers.summarize_activation.md | 16 +
 .../tf.contrib.layers.summarize_collection.md | 4 +
 .../shard4/tf.contrib.layers.summarize_tensors.md | 4 -
 .../shard4/tf.contrib.learn.TensorFlowEstimator.md | 295 -------
 .../shard4/tf.contrib.learn.extract_dask_labels.md | 4 -
 .../shard4/tf.contrib.metrics.confusion_matrix.md | 45 ++
 .../shard4/tf.contrib.metrics.set_union.md | 23 -
 ...ntrib.metrics.streaming_mean_cosine_distance.md | 48 ++
 ...contrib.metrics.streaming_mean_squared_error.md | 48 --
 .../shard4/tf.contrib.util.constant_value.md | 31 +
 .../tf.contrib.util.ops_used_by_graph_def.md | 15 +
 .../shard4/tf.convert_to_tensor.md | 47 ++
 .../functions_and_classes/shard4/tf.device.md | 19 +
 .../shard4/tf.dynamic_partition.md | 50 --
 .../shard4/tf.errors.CancelledError.md | 17 -
 .../shard4/tf.errors.DeadlineExceededError.md | 11 +
 .../shard4/tf.errors.OutOfRangeError.md | 15 -
 .../shard4/tf.errors.PermissionDeniedError.md | 14 -
 .../shard4/tf.errors.UnimplementedError.md | 15 -
 .../functions_and_classes/shard4/tf.fft3d.md | 14 +
 .../functions_and_classes/shard4/tf.gather.md | 35 -
 .../shard4/tf.get_collection_ref.md | 20 -
 .../functions_and_classes/shard4/tf.get_seed.md | 22 +
 .../shard4/tf.get_session_handle.md | 38 +
 .../shard4/tf.get_session_tensor.md | 22 -
 .../functions_and_classes/shard4/tf.global_norm.md | 27 -
 .../functions_and_classes/shard4/tf.identity.md | 14 +
 .../functions_and_classes/shard4/tf.ifft2d.md | 15 -
 .../functions_and_classes/shard4/tf.ifft3d.md | 15 +
 .../functions_and_classes/shard4/tf.igammac.md | 29 +
 .../python/functions_and_classes/shard4/tf.imag.md | 27 +
 .../shard4/tf.image.adjust_saturation.md | 25 +
 .../shard4/tf.image.convert_image_dtype.md | 32 +
 .../shard4/tf.image.flip_left_right.md | 23 +
 .../shard4/tf.image.flip_up_down.md | 23 -
 .../shard4/tf.image.pad_to_bounding_box.md | 30 +
 .../tf.image.resize_image_with_crop_or_pad.md | 30 +
 .../shard4/tf.image.resize_nearest_neighbor.md | 22 +
 .../shard4/tf.image.rgb_to_grayscale.md | 19 +
 .../shard4/tf.image.rgb_to_hsv.md | 23 +
 .../shard4/tf.is_non_decreasing.md | 25 -
 .../functions_and_classes/shard4/tf.lgamma.md | 14 -
 .../functions_and_classes/shard4/tf.linspace.md | 28 -
 .../functions_and_classes/shard4/tf.logical_and.md | 15 +
 .../functions_and_classes/shard4/tf.logical_or.md | 15 -
 .../functions_and_classes/shard4/tf.map_fn.md | 42 -
 .../functions_and_classes/shard4/tf.maximum.md | 15 -
 .../shard4/tf.merge_summary.md | 26 +
 .../python/functions_and_classes/shard4/tf.mul.md | 15 -
 .../python/functions_and_classes/shard4/tf.neg.md | 16 -
 .../shard4/tf.nn.atrous_conv2d.md | 107 ---
 .../functions_and_classes/shard4/tf.nn.conv3d.md | 29 +
 .../shard4/tf.nn.depthwise_conv2d.md | 37 -
 .../functions_and_classes/shard4/tf.nn.max_pool.md | 21 +
 .../shard4/tf.nn.max_pool_with_argmax.md | 30 +
 .../functions_and_classes/shard4/tf.nn.moments.md | 30 -
 .../shard4/tf.nn.sampled_softmax_loss.md | 49 ++
 .../functions_and_classes/shard4/tf.nn.softmax.md | 19 +
 .../shard4/tf.nn.sufficient_statistics.md | 27 +
 .../tf.nn.weighted_cross_entropy_with_logits.md | 52 --
 .../functions_and_classes/shard4/tf.not_equal.md | 15 -
 .../python/functions_and_classes/shard4/tf.pad.md | 57 ++
 .../shard4/tf.parse_example.md | 153 ----
 .../functions_and_classes/shard4/tf.placeholder.md | 34 +
 .../shard4/tf.python_io.tf_record_iterator.md | 18 +
 .../functions_and_classes/shard4/tf.read_file.md | 14 +
 .../python/functions_and_classes/shard4/tf.real.md | 28 +
 .../functions_and_classes/shard4/tf.reshape.md | 72 --
 .../shard4/tf.scatter_update.md | 46 ++
 .../shard4/tf.segment_prod.md | 31 -
 .../functions_and_classes/shard4/tf.select.md | 56 ++
 .../python/functions_and_classes/shard4/tf.sin.md | 14 -
 .../functions_and_classes/shard4/tf.slice.md | 47 --
 .../shard4/tf.sparse_concat.md | 100 +++
 .../functions_and_classes/shard4/tf.sparse_mask.md | 39 +
 .../shard4/tf.sparse_to_indicator.md | 52 --
 .../shard4/tf.string_to_hash_bucket_strong.md | 30 -
 .../python/functions_and_classes/shard4/tf.sub.md | 15 -
 .../shard4/tf.test.compute_gradient.md | 40 -
 .../functions_and_classes/shard4/tf.to_int32.md | 19 +
 .../functions_and_classes/shard4/tf.trace.md | 29 +
 .../shard4/tf.train.ClusterSpec.md | 86 +++
 .../shard4/tf.train.SessionManager.md | 187 -----
 .../shard4/tf.train.Supervisor.md | 845 +++++++++++++++++++++
 .../shard4/tf.train.get_checkpoint_state.md | 19 -
 .../shard4/tf.train.input_producer.md | 38 -
 .../shard4/tf.train.start_queue_runners.md | 24 -
 .../shard4/tf.train.string_input_producer.md | 32 +
 .../shard4/tf.train.update_checkpoint_state.md | 24 -
 .../functions_and_classes/shard4/tf.transpose.md | 49 ++
 .../shard4/tf.truncated_normal_initializer.md | 31 -
 .../shard4/tf.uniform_unit_scaling_initializer.md | 47 ++
 .../functions_and_classes/shard4/tf.unpack.md | 32 +
 .../functions_and_classes/shard5/tf.DType.md | 206 +++
 .../functions_and_classes/shard5/tf.Dimension.md | 83 ++
 .../shard5/tf.FixedLenSequenceFeature.md | 31 +
 .../shard5/tf.FixedLengthRecordReader.md | 148 ++++
 .../functions_and_classes/shard5/tf.GraphKeys.md | 36 +
 .../functions_and_classes/shard5/tf.Session.md | 236 ++++++
 .../shard5/tf.SparseTensor.md | 143 ++++
 .../shard5/tf.TFRecordReader.md | 145 ----
 .../shard5/tf.Variable.from_proto.md | 4 -
 .../functions_and_classes/shard5/tf.Variable.md | 460 -----------
 .../shard5/tf.accumulate_n.md | 37 +
 .../shard5/tf.assert_equal.md | 35 +
 .../shard5/tf.assert_non_positive.md | 34 +
 .../shard5/tf.assert_positive.md | 33 +
 .../shard5/tf.batch_matmul.md | 41 +
 .../shard5/tf.clip_by_norm.md | 29 +
 .../functions_and_classes/shard5/tf.concat.md | 45 --
 ...tf.contrib.copy_graph.copy_variable_to_graph.md | 20 +
 .../shard5/tf.contrib.distributions.Chi2.md | 260 -------
 .../shard5/tf.contrib.distributions.Gamma.md | 284 -------
 .../shard5/tf.contrib.layers.convolution2d.md | 43 ++
 .../shard5/tf.contrib.layers.l2_regularizer.md | 22 +
 .../shard5/tf.contrib.layers.xavier_initializer.md | 29 -
 .../shard5/tf.contrib.learn.RunConfig.md | 47 ++
 .../shard5/tf.contrib.learn.evaluate.md | 44 --
 .../shard5/tf.contrib.learn.infer.md | 4 -
 .../shard5/tf.contrib.learn.run_feeds.md | 28 +
 .../shard5/tf.contrib.metrics.streaming_mean.md | 44 --
 ...ntrib.metrics.streaming_mean_cosine_distance.md | 48 --
 .../functions_and_classes/shard5/tf.decode_csv.md | 26 +
 .../functions_and_classes/shard5/tf.digamma.md | 16 +
 .../shard5/tf.dynamic_partition.md | 50 ++
 .../functions_and_classes/shard5/tf.equal.md | 15 +
 .../shard5/tf.errors.AbortedError.md | 15 +
 .../shard5/tf.errors.PermissionDeniedError.md | 14 +
 .../shard5/tf.errors.ResourceExhaustedError.md | 12 +
 .../shard5/tf.errors.UnauthenticatedError.md | 11 -
 .../functions_and_classes/shard5/tf.expand_dims.md | 50 ++
 .../python/functions_and_classes/shard5/tf.fill.md | 26 +
 .../shard5/tf.get_default_graph.md | 17 -
 .../functions_and_classes/shard5/tf.get_seed.md | 22 -
 .../shard5/tf.get_session_tensor.md | 22 +
 .../functions_and_classes/shard5/tf.global_norm.md | 27 +
 .../functions_and_classes/shard5/tf.ifft2d.md | 15 +
 .../shard5/tf.image.adjust_brightness.md | 25 +
 .../shard5/tf.image.encode_jpeg.md | 51 ++
 .../shard5/tf.image.encode_png.md | 28 +
 .../shard5/tf.image.pad_to_bounding_box.md | 30 -
 .../shard5/tf.image.random_flip_up_down.md | 24 -
 .../shard5/tf.image.resize_bilinear.md | 24 +
 .../shard5/tf.image.rgb_to_grayscale.md | 19 -
 .../shard5/tf.initialize_local_variables.md | 10 -
 .../python/functions_and_classes/shard5/tf.inv.md | 16 -
 .../functions_and_classes/shard5/tf.is_finite.md | 14 +
 .../shard5/tf.is_variable_initialized.md | 14 +
 .../functions_and_classes/shard5/tf.lbeta.md | 31 +
 .../python/functions_and_classes/shard5/tf.less.md | 15 +
 .../shard5/tf.load_file_system_library.md | 23 +
 .../functions_and_classes/shard5/tf.logical_not.md | 14 -
 .../functions_and_classes/shard5/tf.logical_or.md | 15 +
 .../functions_and_classes/shard5/tf.logical_xor.md | 4 -
 .../shard5/tf.matrix_inverse.md | 27 -
 .../shard5/tf.matrix_triangular_solve.md | 35 -
 .../python/functions_and_classes/shard5/tf.mod.md | 15 +
 .../python/functions_and_classes/shard5/tf.neg.md | 16 +
 .../shard5/tf.nn.compute_accidental_hits.md | 45 ++
 .../functions_and_classes/shard5/tf.nn.conv2d.md | 49 ++
 .../functions_and_classes/shard5/tf.nn.conv3d.md | 29 -
 .../shard5/tf.nn.depthwise_conv2d_native.md | 37 -
 .../functions_and_classes/shard5/tf.nn.elu.md | 17 -
 .../functions_and_classes/shard5/tf.nn.in_top_k.md | 33 -
 .../functions_and_classes/shard5/tf.nn.top_k.md | 31 +
 .../shard5/tf.nn.uniform_candidate_sampler.md | 49 ++
 .../functions_and_classes/shard5/tf.one_hot.md | 129 ++++
 .../functions_and_classes/shard5/tf.op_scope.md | 36 -
 .../shard5/tf.parse_example.md | 153 ++++
 .../shard5/tf.random_shuffle.md | 29 +
 .../functions_and_classes/shard5/tf.reduce_join.md | 49 --
 .../functions_and_classes/shard5/tf.reduce_max.md | 25 +
 .../functions_and_classes/shard5/tf.reduce_min.md | 25 +
 .../tf.register_tensor_conversion_function.md | 42 -
 .../functions_and_classes/shard5/tf.reshape.md | 72 ++
 .../functions_and_classes/shard5/tf.round.md | 21 +
 .../shard5/tf.saturate_cast.md | 19 -
 .../shard5/tf.scatter_update.md | 46 --
 .../functions_and_classes/shard5/tf.segment_min.md | 31 +
 .../functions_and_classes/shard5/tf.select.md | 56 --
 .../functions_and_classes/shard5/tf.sigmoid.md | 18 +
 .../shard5/tf.space_to_batch.md | 44 ++
 .../shard5/tf.sparse_retain.md | 33 +
 .../shard5/tf.sparse_tensor_dense_matmul.md | 163 ++++
 .../shard5/tf.sparse_tensor_to_dense.md | 43 --
 .../shard5/tf.sparse_to_dense.md | 45 ++
 .../shard5/tf.string_to_hash_bucket_fast.md | 23 -
 .../shard5/tf.string_to_number.md | 20 +
 .../shard5/tf.test.compute_gradient_error.md | 36 -
 .../shard5/tf.train.SessionManager.md | 187 +++++
 .../shard5/tf.train.add_queue_runner.md | 18 -
 .../shard5/tf.train.batch_join.md | 79 ++
 .../shard5/tf.train.import_meta_graph.md | 65 ++
 .../shard5/tf.train.shuffle_batch_join.md | 68 ++
 .../functions_and_classes/shard5/tf.truediv.md | 31 -
 .../shard5/tf.truncated_normal_initializer.md | 31 +
 .../functions_and_classes/shard5/tf.unique.md | 33 +
 .../functions_and_classes/shard5/tf.unpack.md | 32 -
 .../shard5/tf.variable_op_scope.md | 56 --
 .../shard5/tf.variable_scope.md | 82 ++
 .../functions_and_classes/shard5/tf.zeros_like.md | 28 +
 .../python/functions_and_classes/shard5/tf.zeta.md | 21 -
 .../functions_and_classes/shard6/tf.Operation.md | 225 ++++++
 .../functions_and_classes/shard6/tf.QueueBase.md | 268 -------
 .../functions_and_classes/shard6/tf.ReaderBase.md | 156 ++++
 .../functions_and_classes/shard6/tf.Tensor.md | 228 ------
 .../functions_and_classes/shard6/tf.Variable.md | 460 +++++++++++
 .../python/functions_and_classes/shard6/tf.add.md | 17 +
 .../shard6/tf.add_to_collection.md | 14 -
 .../functions_and_classes/shard6/tf.as_dtype.md | 21 -
 .../shard6/tf.assert_integer.md | 30 -
 .../shard6/tf.assert_non_positive.md | 34 -
 .../functions_and_classes/shard6/tf.assert_rank.md | 36 +
 .../shard6/tf.batch_cholesky.md | 20 -
.../functions_and_classes/shard6/tf.batch_fft2d.md | 18 +
.../shard6/tf.batch_matmul.md | 41 -
.../shard6/tf.batch_matrix_determinant.md | 19 +
.../python/functions_and_classes/shard6/tf.case.md | 75 ++
.../shard6/tf.clip_by_value.md | 21 -
.../tf.contrib.copy_graph.copy_op_to_graph.md | 29 +
...tf.contrib.copy_graph.copy_variable_to_graph.md | 20 -
...f.contrib.distributions.DiscreteDistribution.md | 139 ----
.../shard6/tf.contrib.distributions.StudentT.md | 245 ------
.../shard6/tf.contrib.ffmpeg.encode_audio.md | 19 +
.../shard6/tf.contrib.layers.convolution2d.md | 43 --
.../shard6/tf.contrib.layers.summarize_tensor.md | 18 -
....contrib.layers.variance_scaling_initializer.md | 47 ++
.../shard6/tf.contrib.learn.Estimator.md | 215 ++++++
.../tf.contrib.learn.NanLossDuringTrainingError.md | 1 +
.../tf.contrib.learn.TensorFlowDNNRegressor.md | 302 ++++++++
.../tf.contrib.learn.TensorFlowLinearRegressor.md | 279 +++++++
.../tf.contrib.learn.TensorFlowRNNRegressor.md | 312 ++++++++
.../shard6/tf.contrib.learn.TensorFlowRegressor.md | 279 -------
.../shard6/tf.contrib.learn.read_batch_features.md | 43 --
.../shard6/tf.contrib.learn.run_n.md | 19 +
.../tf.contrib.metrics.auc_using_histogram.md | 38 +
.../shard6/tf.contrib.metrics.set_size.md | 22 +
.../shard6/tf.contrib.metrics.streaming_auc.md | 58 --
...tf.contrib.metrics.streaming_percentage_less.md | 47 ++
...ib.metrics.streaming_root_mean_squared_error.md | 48 --
.../shard6/tf.contrib.util.make_ndarray.md | 20 +
.../shard6/tf.contrib.util.make_tensor_proto.md | 44 ++
.../tf.contrib.util.stripped_op_list_for_graph.md | 23 -
.../shard6/tf.control_dependencies.md | 20 -
.../functions_and_classes/shard6/tf.diag_part.md | 34 +
.../functions_and_classes/shard6/tf.digamma.md | 16 -
.../shard6/tf.edit_distance.md | 65 --
.../functions_and_classes/shard6/tf.equal.md | 15 -
.../shard6/tf.errors.DataLossError.md | 13 +
.../shard6/tf.errors.NotFoundError.md | 14 +
.../shard6/tf.errors.ResourceExhaustedError.md | 12 -
.../functions_and_classes/shard6/tf.expand_dims.md | 50 --
.../python/functions_and_classes/shard6/tf.fft.md | 14 -
.../python/functions_and_classes/shard6/tf.fill.md | 26 -
.../functions_and_classes/shard6/tf.foldl.md | 44 ++
.../functions_and_classes/shard6/tf.foldr.md | 44 ++
.../shard6/tf.get_collection.md | 25 -
.../shard6/tf.get_variable_scope.md | 4 +
.../shard6/tf.greater_equal.md | 15 +
.../shard6/tf.histogram_fixed_width.md | 38 -
.../functions_and_classes/shard6/tf.igamma.md | 29 +
.../shard6/tf.image.adjust_brightness.md | 25 -
.../shard6/tf.image.draw_bounding_boxes.md | 32 -
.../shard6/tf.image.grayscale_to_rgb.md | 17 +
.../shard6/tf.image.random_flip_left_right.md | 24 -
.../shard6/tf.image.random_hue.md | 28 -
.../shard6/tf.image.random_saturation.md | 27 +
.../functions_and_classes/shard6/tf.is_finite.md | 14 -
.../functions_and_classes/shard6/tf.is_inf.md | 14 +
.../shard6/tf.is_numeric_tensor.md | 4 -
.../shard6/tf.is_strictly_increasing.md | 26 -
.../shard6/tf.is_variable_initialized.md | 14 -
.../functions_and_classes/shard6/tf.lbeta.md | 31 -
.../functions_and_classes/shard6/tf.listdiff.md | 40 -
.../shard6/tf.load_op_library.md | 24 -
.../shard6/tf.matrix_determinant.md | 16 -
.../shard6/tf.moving_average_variables.md | 13 +
.../shard6/tf.nn.avg_pool3d.md | 24 -
.../functions_and_classes/shard6/tf.nn.conv2d.md | 49 --
.../shard6/tf.nn.conv2d_transpose.md | 34 -
.../shard6/tf.nn.embedding_lookup.md | 50 --
.../shard6/tf.nn.embedding_lookup_sparse.md | 66 --
.../functions_and_classes/shard6/tf.nn.l2_loss.md | 19 +
.../shard6/tf.nn.log_softmax.md | 19 -
.../shard6/tf.nn.log_uniform_candidate_sampler.md | 56 ++
.../shard6/tf.nn.max_pool3d.md | 23 -
.../shard6/tf.nn.zero_fraction.md | 21 -
.../functions_and_classes/shard6/tf.ones_like.md | 28 +
.../shard6/tf.placeholder_with_default.md | 17 +
.../shard6/tf.python_io.TFRecordWriter.md | 41 -
.../functions_and_classes/shard6/tf.random_crop.md | 25 -
.../shard6/tf.random_uniform.md | 41 +
.../functions_and_classes/shard6/tf.range.md | 37 +
.../python/functions_and_classes/shard6/tf.rank.md | 28 +
.../functions_and_classes/shard6/tf.reduce_join.md | 49 ++
.../functions_and_classes/shard6/tf.reduce_max.md | 25 -
.../functions_and_classes/shard6/tf.reduce_sum.md | 37 +
.../functions_and_classes/shard6/tf.reverse.md | 61 --
.../shard6/tf.saturate_cast.md | 19 +
.../functions_and_classes/shard6/tf.scatter_add.md | 46 --
.../python/functions_and_classes/shard6/tf.size.md | 24 -
.../functions_and_classes/shard6/tf.sparse_add.md | 55 ++
.../shard6/tf.sparse_reset_shape.md | 60 ++
.../shard6/tf.sparse_retain.md | 33 -
.../shard6/tf.sparse_segment_sqrt_n.md | 26 -
.../shard6/tf.sparse_softmax.md | 51 --
.../shard6/tf.sparse_tensor_dense_matmul.md | 163 ----
.../shard6/tf.sparse_tensor_to_dense.md | 43 ++
.../functions_and_classes/shard6/tf.split.md | 29 -
.../python/functions_and_classes/shard6/tf.sqrt.md | 16 +
.../functions_and_classes/shard6/tf.square.md | 16 -
.../shard6/tf.train.ExponentialMovingAverage.md | 229 ++++++
.../shard6/tf.train.FtrlOptimizer.md | 32 +
.../shard6/tf.train.Optimizer.md | 255 +++++++
.../shard6/tf.train.QueueRunner.from_proto.md | 4 +
.../shard6/tf.train.Server.md | 113 ---
.../functions_and_classes/shard6/tf.train.batch.md | 68 --
.../shard6/tf.train.replica_device_setter.md | 50 ++
.../functions_and_classes/shard6/tf.tuple.md | 36 -
.../shard6/tf.unsorted_segment_sum.md | 38 +
.../functions_and_classes/shard6/tf.where.md | 46 ++
.../functions_and_classes/shard6/tf.while_loop.md | 60 ++
.../functions_and_classes/shard6/tf.zeros.md | 24 +
.../shard6/tf.zeros_initializer.md | 4 -
.../shard7/tf.AggregationMethod.md | 10 -
.../shard7/tf.DeviceSpec.from_string.md | 18 +
.../functions_and_classes/shard7/tf.FIFOQueue.md | 41 -
.../shard7/tf.IdentityReader.md | 148 ++++
.../functions_and_classes/shard7/tf.NoGradient.md | 23 +
.../shard7/tf.RegisterGradient.md | 36 +
.../functions_and_classes/shard7/tf.Tensor.md | 228 ++++++
.../shard7/tf.add_check_numerics_ops.md | 13 -
.../shard7/tf.add_to_collection.md | 14 +
.../functions_and_classes/shard7/tf.argmax.md | 17 -
.../functions_and_classes/shard7/tf.argmin.md | 17 -
.../shard7/tf.assert_integer.md | 30 +
.../shard7/tf.assert_less_equal.md | 35 -
.../shard7/tf.assert_non_negative.md | 34 +
.../shard7/tf.assert_proper_iterable.md | 18 -
.../functions_and_classes/shard7/tf.assert_type.md | 15 -
.../shard7/tf.audio_summary.md | 35 -
.../shard7/tf.batch_ifft2d.md | 18 -
.../shard7/tf.batch_matrix_inverse.md | 28 +
.../shard7/tf.batch_to_space.md | 37 +
.../python/functions_and_classes/shard7/tf.ceil.md | 14 +
.../shard7/tf.check_numerics.md | 18 +
.../shard7/tf.clip_by_average_norm.md | 29 +
.../shard7/tf.clip_by_global_norm.md | 50 --
.../shard7/tf.clip_by_value.md | 21 +
.../python/functions_and_classes/shard7/tf.conj.md | 28 -
.../tf.contrib.copy_graph.copy_op_to_graph.md | 29 -
.../tf.contrib.distributions.BaseDistribution.md | 195 +++++
...contrib.distributions.ContinuousDistribution.md | 153 ++++
.../shard7/tf.contrib.distributions.StudentT.md | 245 ++++++
...ons.normal_congugates_known_sigma_predictive.md | 55 --
.../shard7/tf.contrib.layers.fully_connected.md | 46 ++
.../shard7/tf.contrib.layers.l1_regularizer.md | 22 -
.../tf.contrib.layers.summarize_activation.md | 16 -
.../tf.contrib.layers.summarize_activations.md | 4 -
.../tf.contrib.learn.NanLossDuringTrainingError.md | 1 -
.../tf.contrib.learn.TensorFlowDNNClassifier.md | 302 --------
.../tf.contrib.learn.TensorFlowLinearClassifier.md | 279 -------
.../shard7/tf.contrib.learn.run_n.md | 19 -
.../shard7/tf.contrib.learn.train.md | 62 --
.../shard7/tf.contrib.metrics.confusion_matrix.md | 45 --
.../shard7/tf.contrib.metrics.set_intersection.md | 23 -
...tf.contrib.metrics.streaming_percentage_less.md | 47 --
.../tf.contrib.metrics.streaming_precision.md | 50 --
...trib.metrics.streaming_sparse_precision_at_k.md | 60 ++
...contrib.metrics.streaming_sparse_recall_at_k.md | 59 ++
.../shard7/tf.contrib.util.make_tensor_proto.md | 44 --
.../shard7/tf.control_dependencies.md | 20 +
.../shard7/tf.convert_to_tensor.md | 47 --
.../functions_and_classes/shard7/tf.count_up_to.md | 23 -
.../functions_and_classes/shard7/tf.device.md | 19 -
.../python/functions_and_classes/shard7/tf.diag.md | 33 +
.../functions_and_classes/shard7/tf.diag_part.md | 34 -
.../shard7/tf.errors.AlreadyExistsError.md | 14 -
.../shard7/tf.errors.InternalError.md | 12 -
.../shard7/tf.errors.OutOfRangeError.md | 15 +
.../shard7/tf.errors.UnavailableError.md | 11 -
.../functions_and_classes/shard7/tf.fft3d.md | 14 -
.../functions_and_classes/shard7/tf.floor.md | 14 +
.../functions_and_classes/shard7/tf.floordiv.md | 32 +
.../functions_and_classes/shard7/tf.foldl.md | 44 --
.../functions_and_classes/shard7/tf.gather.md | 35 +
.../shard7/tf.get_default_session.md | 16 -
.../shard7/tf.get_session_handle.md | 38 -
.../shard7/tf.get_variable_scope.md | 4 -
.../shard7/tf.greater_equal.md | 15 -
.../shard7/tf.histogram_fixed_width.md | 38 +
.../python/functions_and_classes/shard7/tf.ifft.md | 15 +
.../python/functions_and_classes/shard7/tf.imag.md | 27 -
.../shard7/tf.image.central_crop.md | 30 +
.../shard7/tf.image.crop_to_bounding_box.md | 30 -
.../shard7/tf.image.decode_png.md | 30 +
.../shard7/tf.image.draw_bounding_boxes.md | 32 +
.../shard7/tf.image.per_image_whitening.md | 29 +
.../shard7/tf.image.random_flip_left_right.md | 24 +
.../tf.image.resize_image_with_crop_or_pad.md | 30 -
.../shard7/tf.image.transpose_image.md | 20 +
.../shard7/tf.initialize_all_variables.md | 10 -
.../shard7/tf.invert_permutation.md | 30 +
.../functions_and_classes/shard7/tf.is_nan.md | 14 -
.../shard7/tf.is_numeric_tensor.md | 4 +
.../shard7/tf.load_op_library.md | 24 +
.../functions_and_classes/shard7/tf.logical_and.md | 15 -
.../functions_and_classes/shard7/tf.map_fn.md | 42 +
.../functions_and_classes/shard7/tf.matmul.md | 46 --
.../shard7/tf.matrix_determinant.md | 16 +
.../shard7/tf.matrix_solve_ls.md | 47 --
.../functions_and_classes/shard7/tf.minimum.md | 15 +
.../functions_and_classes/shard7/tf.nn.avg_pool.md | 25 -
.../shard7/tf.nn.batch_normalization.md | 46 --
.../shard7/tf.nn.embedding_lookup.md | 50 ++
.../tf.nn.learned_unigram_candidate_sampler.md | 53 --
.../shard7/tf.nn.log_softmax.md | 19 +
.../shard7/tf.nn.max_pool3d.md | 23 +
.../functions_and_classes/shard7/tf.nn.moments.md | 30 +
.../functions_and_classes/shard7/tf.nn.relu.md | 14 +
.../functions_and_classes/shard7/tf.nn.softmax.md | 19 -
.../functions_and_classes/shard7/tf.nn.softsign.md | 14 -
.../shard7/tf.nn.sufficient_statistics.md | 27 -
.../shard7/tf.nn.zero_fraction.md | 21 +
.../python/functions_and_classes/shard7/tf.ones.md | 24 +
.../shard7/tf.python_io.tf_record_iterator.md | 18 -
.../functions_and_classes/shard7/tf.random_crop.md | 25 +
.../shard7/tf.random_uniform_initializer.md | 25 +
.../functions_and_classes/shard7/tf.range.md | 37 -
.../functions_and_classes/shard7/tf.reduce_all.md | 35 -
.../shard7/tf.reverse_sequence.md | 76 --
.../python/functions_and_classes/shard7/tf.sign.md | 18 +
.../python/functions_and_classes/shard7/tf.sin.md | 14 +
.../shard7/tf.space_to_depth.md | 87 ---
.../functions_and_classes/shard7/tf.sparse_mask.md | 39 -
.../shard7/tf.string_to_hash_bucket_strong.md | 30 +
.../python/functions_and_classes/shard7/tf.tanh.md | 16 +
.../functions_and_classes/shard7/tf.test.main.md | 4 -
.../functions_and_classes/shard7/tf.to_double.md | 19 -
.../functions_and_classes/shard7/tf.to_float.md | 19 -
.../functions_and_classes/shard7/tf.trace.md | 29 -
.../shard7/tf.train.FtrlOptimizer.md | 32 -
.../shard7/tf.train.LooperThread.md | 215 ++++++
.../shard7/tf.train.MomentumOptimizer.md | 18 -
.../shard7/tf.train.RMSPropOptimizer.md | 23 -
.../shard7/tf.train.Saver.from_proto.md | 4 -
.../shard7/tf.train.Server.md | 113 +++
.../shard7/tf.train.SummaryWriter.md | 170 -----
.../shard7/tf.train.Supervisor.md | 845 ---------------------
.../tf.train.generate_checkpoint_state_proto.md | 20 -
.../shard7/tf.train.input_producer.md | 38 +
.../shard7/tf.train.limit_epochs.md | 21 -
.../shard7/tf.train.shuffle_batch.md | 74 ++
.../shard7/tf.train.summary_iterator.md | 42 +
.../shard7/tf.trainable_variables.md | 13 +
.../functions_and_classes/shard7/tf.transpose.md | 49 --
.../shard7/tf.variable_axis_size_partitioner.md | 37 -
.../functions_and_classes/shard8/tf.Dimension.md | 83 --
.../shard8/tf.FixedLenSequenceFeature.md | 31 -
.../functions_and_classes/shard8/tf.Graph.md | 783 -------------------
.../functions_and_classes/shard8/tf.GraphKeys.md | 36 -
.../functions_and_classes/shard8/tf.OpError.md | 62 ++
.../functions_and_classes/shard8/tf.Print.md | 23 +
.../shard8/tf.SparseTensor.md | 143 ----
.../shard8/tf.TextLineReader.md | 148 ++++
.../shard8/tf.VarLenFeature.md | 11 -
.../functions_and_classes/shard8/tf.add_n.md | 15 -
.../functions_and_classes/shard8/tf.argmax.md | 17 +
.../functions_and_classes/shard8/tf.argmin.md | 17 +
.../shard8/tf.assert_positive.md | 33 -
.../shard8/tf.assert_proper_iterable.md | 18 +
.../shard8/tf.audio_summary.md | 35 +
.../functions_and_classes/shard8/tf.batch_ifft.md | 18 +
.../shard8/tf.batch_matrix_diag.md | 42 -
.../shard8/tf.batch_matrix_diag_part.md | 46 ++
.../shard8/tf.batch_matrix_solve.md | 27 +
.../shard8/tf.batch_matrix_solve_ls.md | 56 --
.../shard8/tf.batch_matrix_triangular_solve.md | 39 -
.../shard8/tf.clip_by_average_norm.md | 29 -
.../shard8/tf.clip_by_global_norm.md | 50 ++
.../shard8/tf.clip_by_norm.md | 29 -
.../shard8/tf.constant_initializer.md | 20 -
...contrib.distributions.ContinuousDistribution.md | 153 ----
...f.contrib.distributions.DirichletMultinomial.md | 185 -----
.../shard8/tf.contrib.distributions.Exponential.md | 260 +++++++
.../shard8/tf.contrib.distributions.Gamma.md | 284 +++++++
.../shard8/tf.contrib.distributions.Normal.md | 209 -----
...ions.normal_conjugates_known_sigma_posterior.md | 48 ++
.../shard8/tf.contrib.ffmpeg.decode_audio.md | 25 +
.../shard8/tf.contrib.learn.ModeKeys.md | 7 +
.../tf.contrib.learn.extract_pandas_labels.md | 4 -
.../shard8/tf.contrib.metrics.accuracy.md | 23 -
.../shard8/tf.contrib.metrics.set_intersection.md | 23 +
.../shard8/tf.contrib.metrics.streaming_mean.md | 44 ++
.../tf.contrib.metrics.streaming_precision.md | 50 ++
.../shard8/tf.contrib.metrics.streaming_recall.md | 50 ++
.../tf.convert_to_tensor_or_indexed_slices.md | 27 +
.../python/functions_and_classes/shard8/tf.cos.md | 14 +
.../functions_and_classes/shard8/tf.count_up_to.md | 23 +
.../functions_and_classes/shard8/tf.cross.md | 22 -
.../functions_and_classes/shard8/tf.decode_raw.md | 23 -
.../python/functions_and_classes/shard8/tf.diag.md | 33 -
.../shard8/tf.errors.AbortedError.md | 15 -
.../shard8/tf.errors.FailedPreconditionError.md | 13 +
.../shard8/tf.get_default_graph.md | 17 +
.../functions_and_classes/shard8/tf.gradients.md | 48 ++
.../shard8/tf.histogram_summary.md | 25 +
.../shard8/tf.image.adjust_contrast.md | 29 +
.../shard8/tf.image.adjust_hue.md | 26 -
.../shard8/tf.image.central_crop.md | 30 -
.../shard8/tf.image.crop_to_bounding_box.md | 30 +
.../shard8/tf.image.decode_jpeg.md | 41 -
.../shard8/tf.image.encode_png.md | 28 -
.../shard8/tf.image.hsv_to_rgb.md | 21 +
.../shard8/tf.image.resize_bicubic.md | 24 -
.../shard8/tf.image.resize_bilinear.md | 24 -
.../tf.image.sample_distorted_bounding_box.md | 85 +++
.../shard8/tf.image_summary.md | 48 --
.../shard8/tf.initialize_variables.md | 24 +
.../python/functions_and_classes/shard8/tf.less.md | 15 -
.../functions_and_classes/shard8/tf.less_equal.md | 15 +
.../shard8/tf.load_file_system_library.md | 23 -
.../functions_and_classes/shard8/tf.logical_not.md | 14 +
.../functions_and_classes/shard8/tf.logical_xor.md | 4 +
.../shard8/tf.make_template.md | 105 +++
.../functions_and_classes/shard8/tf.minimum.md | 15 -
.../functions_and_classes/shard8/tf.multinomial.md | 28 +
.../functions_and_classes/shard8/tf.name_scope.md | 18 +
.../functions_and_classes/shard8/tf.nn.avg_pool.md | 25 +
.../functions_and_classes/shard8/tf.nn.bias_add.md | 24 +
.../shard8/tf.nn.compute_accidental_hits.md | 45 --
.../shard8/tf.nn.depthwise_conv2d_native.md | 37 +
.../functions_and_classes/shard8/tf.nn.elu.md | 17 +
.../tf.nn.fixed_unigram_candidate_sampler.md | 75 --
.../shard8/tf.nn.local_response_normalization.md | 34 +
.../shard8/tf.nn.separable_conv2d.md | 40 -
.../tf.nn.sigmoid_cross_entropy_with_logits.md | 48 --
.../functions_and_classes/shard8/tf.nn.softsign.md | 14 +
....nn.sparse_softmax_cross_entropy_with_logits.md | 38 -
.../shard8/tf.no_regularizer.md | 4 +
.../functions_and_classes/shard8/tf.one_hot.md | 129 ----
.../shard8/tf.ones_initializer.md | 4 +
.../functions_and_classes/shard8/tf.op_scope.md | 36 +
.../functions_and_classes/shard8/tf.py_func.md | 31 -
.../shard8/tf.random_normal.md | 23 +
.../shard8/tf.random_normal_initializer.md | 25 -
.../functions_and_classes/shard8/tf.reduce_all.md | 35 +
.../functions_and_classes/shard8/tf.reduce_mean.md | 35 -
.../tf.register_tensor_conversion_function.md | 42 +
.../functions_and_classes/shard8/tf.rsqrt.md | 16 -
.../python/functions_and_classes/shard8/tf.scan.md | 44 --
.../functions_and_classes/shard8/tf.scatter_sub.md | 44 ++
.../functions_and_classes/shard8/tf.segment_min.md | 31 -
.../functions_and_classes/shard8/tf.segment_sum.md | 30 +
.../functions_and_classes/shard8/tf.shape_n.md | 16 -
.../shard8/tf.sparse_fill_empty_rows.md | 54 ++
.../shard8/tf.sparse_placeholder.md | 43 ++
.../shard8/tf.sparse_reorder.md | 41 +
.../shard8/tf.sparse_segment_sum.md | 50 ++
.../shard8/tf.sparse_split.md | 40 -
.../shard8/tf.string_to_hash_bucket.md | 21 +
.../shard8/tf.test.get_temp_dir.md | 10 +
.../shard8/tf.test.is_built_with_cuda.md | 4 -
.../python/functions_and_classes/shard8/tf.tile.md | 22 +
.../functions_and_classes/shard8/tf.to_int64.md | 19 -
.../shard8/tf.train.AdagradOptimizer.md | 26 -
.../shard8/tf.train.AdamOptimizer.md | 49 --
.../shard8/tf.train.Coordinator.md | 223 ------
.../shard8/tf.train.Server.create_local_server.md | 19 -
.../shard8/tf.train.add_queue_runner.md | 18 +
.../shard8/tf.train.exponential_decay.md | 54 --
.../shard8/tf.train.global_step.md | 27 -
.../shard8/tf.train.import_meta_graph.md | 65 --
.../shard8/tf.train.latest_checkpoint.md | 16 -
.../shard8/tf.train.match_filenames_once.md | 14 +
.../shard8/tf.train.summary_iterator.md | 42 -
.../shard8/tf.trainable_variables.md | 13 -
.../functions_and_classes/shard8/tf.unique.md | 33 -
.../shard8/tf.verify_tensor_all_finite.md | 15 -
.../python/functions_and_classes/shard8/tf.zeta.md | 21 +
.../functions_and_classes/shard9/tf.DeviceSpec.md | 146 ++++
.../shard9/tf.FixedLenFeature.md | 31 -
.../shard9/tf.InteractiveSession.md | 68 --
.../functions_and_classes/shard9/tf.OpError.md | 62 --
.../functions_and_classes/shard9/tf.Operation.md | 225 ------
.../functions_and_classes/shard9/tf.Print.md | 23 -
.../functions_and_classes/shard9/tf.ReaderBase.md | 156 ----
.../shard9/tf.RegisterShape.md | 27 +
.../functions_and_classes/shard9/tf.TensorShape.md | 316 ++++++++
.../shard9/tf.VarLenFeature.md | 11 +
.../python/functions_and_classes/shard9/tf.abs.md | 22 +
.../python/functions_and_classes/shard9/tf.add.md | 17 -
.../shard9/tf.all_variables.md | 12 -
.../functions_and_classes/shard9/tf.as_dtype.md | 21 +
.../functions_and_classes/shard9/tf.assert_less.md | 35 +
.../shard9/tf.batch_cholesky_solve.md | 35 +
.../functions_and_classes/shard9/tf.batch_fft.md | 18 -
.../shard9/tf.batch_ifft3d.md | 18 +
.../shard9/tf.batch_matrix_diag_part.md | 46 --
.../functions_and_classes/shard9/tf.bitcast.md | 25 +
.../shard9/tf.boolean_mask.md | 43 ++
.../python/functions_and_classes/shard9/tf.case.md | 75 --
.../functions_and_classes/shard9/tf.cholesky.md | 22 -
.../shard9/tf.contrib.copy_graph.get_copied_op.md | 18 -
...f.contrib.distributions.DirichletMultinomial.md | 185 +++++
.../shard9/tf.contrib.distributions.Normal.md | 209 +++
.../shard9/tf.contrib.ffmpeg.encode_audio.md | 19 -
.../shard9/tf.contrib.layers.optimize_loss.md | 43 --
.../shard9/tf.contrib.learn.BaseEstimator.md | 189 +++++
.../tf.contrib.learn.TensorFlowRNNClassifier.md | 312 --------
.../shard9/tf.contrib.learn.TensorFlowRegressor.md | 279 +++++++
.../shard9/tf.contrib.learn.extract_dask_data.md | 4 -
.../tf.contrib.learn.extract_pandas_labels.md | 4 +
.../shard9/tf.contrib.metrics.set_difference.md | 24 -
.../shard9/tf.contrib.metrics.set_size.md | 22 -
...ontrib.metrics.streaming_mean_absolute_error.md | 48 --
.../shard9/tf.contrib.metrics.streaming_recall.md | 50 --
...ib.metrics.streaming_root_mean_squared_error.md | 48 ++
.../python/functions_and_classes/shard9/tf.cos.md | 14 -
.../functions_and_classes/shard9/tf.decode_raw.md | 23 +
.../shard9/tf.dynamic_stitch.md | 53 ++
.../shard9/tf.edit_distance.md | 65 ++
.../python/functions_and_classes/shard9/tf.erfc.md | 14 -
.../shard9/tf.errors.InvalidArgumentError.md | 17 -
.../shard9/tf.errors.UnimplementedError.md | 15 +
.../shard9/tf.errors.UnknownError.md | 15 -
.../functions_and_classes/shard9/tf.fft2d.md | 14 +
.../functions_and_classes/shard9/tf.foldr.md | 44 --
.../shard9/tf.get_variable.md | 72 ++
.../functions_and_classes/shard9/tf.igammac.md | 29 -
.../shard9/tf.image.random_contrast.md | 26 -
.../shard9/tf.image.resize_images.md | 43 ++
.../shard9/tf.image.resize_nearest_neighbor.md | 22 -
.../shard9/tf.image.rgb_to_hsv.md | 23 -
.../tf.image.sample_distorted_bounding_box.md | 85 ---
.../functions_and_classes/shard9/tf.lgamma.md | 14 +
.../shard9/tf.matching_files.md | 17 +
.../shard9/tf.merge_all_summaries.md | 16 -
.../shard9/tf.moving_average_variables.md | 13 -
.../functions_and_classes/shard9/tf.name_scope.md | 18 -
.../functions_and_classes/shard9/tf.nn.max_pool.md | 21 -
.../functions_and_classes/shard9/tf.nn.nce_loss.md | 53 ++
.../functions_and_classes/shard9/tf.nn.relu6.md | 15 -
.../functions_and_classes/shard9/tf.nn.softplus.md | 14 +
.../functions_and_classes/shard9/tf.no_op.md | 13 -
.../functions_and_classes/shard9/tf.not_equal.md | 15 +
.../shard9/tf.ones_initializer.md | 4 -
.../functions_and_classes/shard9/tf.polygamma.md | 22 -
.../shard9/tf.python_io.TFRecordWriter.md | 41 +
.../shard9/tf.random_normal.md | 23 -
.../shard9/tf.random_normal_initializer.md | 25 +
.../functions_and_classes/shard9/tf.reduce_any.md | 35 +
.../functions_and_classes/shard9/tf.reduce_sum.md | 37 -
.../shard9/tf.reset_default_graph.md | 10 +
.../functions_and_classes/shard9/tf.rsqrt.md | 16 +
.../functions_and_classes/shard9/tf.scalar_mul.md | 23 +
.../shard9/tf.set_random_seed.md | 98 +++
.../python/functions_and_classes/shard9/tf.size.md | 24 +
.../functions_and_classes/shard9/tf.slice.md | 47 ++
.../shard9/tf.sparse_reorder.md | 41 -
.../shard9/tf.sparse_segment_sqrt_n.md | 26 +
.../shard9/tf.sparse_segment_sqrt_n_grad.md | 24 +
.../shard9/tf.sparse_segment_sum.md | 50 --
.../shard9/tf.sparse_split.md | 40 +
.../shard9/tf.sparse_to_indicator.md | 52 ++
.../python/functions_and_classes/shard9/tf.sqrt.md | 16 -
.../functions_and_classes/shard9/tf.squeeze.md | 38 +
.../shard9/tf.stop_gradient.md | 34 -
.../shard9/tf.test.compute_gradient.md | 40 +
.../shard9/tf.train.AdamOptimizer.md | 49 ++
.../shard9/tf.train.ClusterSpec.md | 86 ---
.../shard9/tf.train.ExponentialMovingAverage.md | 229 ------
.../shard9/tf.train.Optimizer.md | 255 -------
.../shard9/tf.train.QueueRunner.from_proto.md | 4 -
.../shard9/tf.train.QueueRunner.md | 161 ----
.../functions_and_classes/shard9/tf.train.batch.md | 68 ++
.../shard9/tf.train.export_meta_graph.md | 24 -
.../shard9/tf.train.latest_checkpoint.md | 16 +
.../shard9/tf.train.update_checkpoint_state.md | 24 +
.../shard9/tf.uniform_unit_scaling_initializer.md | 47 --
.../functions_and_classes/shard9/tf.where.md | 46 --
.../functions_and_classes/shard9/tf.zeros.md | 24 -
1128 files changed, 28071 insertions(+), 28071 deletions(-)
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.AggregationMethod.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.Assert.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FIFOQueue.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLenFeature.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLengthRecordReader.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.NoGradient.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.RegisterGradient.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.accumulate_n.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_equal.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_less.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_negative.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_negative.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_rank_at_least.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_type.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_ifft2d.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_matrix_band_part.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_to_space.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.bytes.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.concat.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.copy_graph.get_copied_op.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.BaseDistribution.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.optimize_loss.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.xavier_initializer.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.BaseEstimator.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowClassifier.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowRNNClassifier.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.evaluate.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_dask_data.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_data.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_examples.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.run_feeds.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.train.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_accuracy.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_absolute_error.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_precision_at_k.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_recall_at_k.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.decode_json_example.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.div.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.dynamic_stitch.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.erfc.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.AlreadyExistsError.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.InvalidArgumentError.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.UnavailableError.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.greater.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.group.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.encode_jpeg.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.extract_glimpse.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_contrast.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_area.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_images.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.transpose_image.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.initialize_local_variables.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.inv.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matching_files.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matrix_solve_ls.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.merge_all_summaries.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.batch_normalization.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.nce_loss.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.relu.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softmax_cross_entropy_with_logits.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softplus.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.top_k.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.uniform_candidate_sampler.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.no_op.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.parse_single_example.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.polygamma.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.pow.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_any.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_prod.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.report_uninitialized_variables.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reset_default_graph.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.round.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.self_adjoint_eig.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.set_random_seed.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.shape.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sign.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.space_to_batch.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_merge.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_segment_sqrt_n_grad.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_to_dense.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squared_difference.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squeeze.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_hash_bucket_fast.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_number.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tanh.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.test.main.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.to_double.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.AdadeltaOptimizer.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GradientDescentOptimizer.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.LooperThread.loop.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.QueueRunner.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.RMSPropOptimizer.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.batch_join.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.generate_checkpoint_state_proto.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.truncated_normal.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.unique_with_counts.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_axis_size_partitioner.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_scope.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_like.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Assert.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.from_list.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.RandomShuffleQueue.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.TextLineReader.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.VariableScope.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_negative.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_determinant.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_diag.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve_ls.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_self_adjoint_eig.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cholesky_solve.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.complex_abs.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.Exponential.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormal.md
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_collection.md
create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md
delete mode 100644
tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.Estimator.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowClassifier.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowEstimator.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowLinearRegressor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_matrix.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.read_batch_features.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_accuracy.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_relative_error.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_recall_at_k.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cross.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.decode_json_example.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.delete_session_tensor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.div.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.CancelledError.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.DataLossError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.FailedPreconditionError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fft.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_collection.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.group.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.histogram_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igamma.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.convert_image_dtype.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.decode_jpeg.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_left_right.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_up_down.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.hsv_to_rgb.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_brightness.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_saturation.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_area.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.import_graph_def.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.initialize_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_inf.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.listdiff.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.local_variables.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matrix_solve.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.maximum.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_summary.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.mul.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.atrous_conv2d.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv2d_transpose.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.dropout.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.l2_loss.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.max_pool_with_argmax.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sigmoid_cross_entropy_with_logits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.softmax_cross_entropy_with_logits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sparse_softmax_cross_entropy_with_logits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.weighted_cross_entropy_with_logits.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.no_regularizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pack.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.placeholder.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pow.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.py_func.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.rank.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.real.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_prod.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.report_uninitialized_variables.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reverse.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.scan.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.segment_prod.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.self_adjoint_eig.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape_n.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.squared_difference.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sub.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.is_built_with_cuda.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.tile.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.to_int64.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdagradOptimizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.loop.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.global_step.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.match_filenames_once.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.replica_device_setter.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.slice_input_producer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.start_queue_runners.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.string_input_producer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.truncated_normal.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unique_with_counts.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.verify_tensor_all_finite.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.from_string.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.IdentityReader.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.QueueBase.from_list.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RandomShuffleQueue.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RegisterShape.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Session.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Variable.from_proto.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.abs.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.all_variables.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_less_equal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_cholesky_solve.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_fft.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_ifft3d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_matrix_inverse.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_self_adjoint_eig.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.bitcast.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.boolean_mask.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ceil.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.check_numerics.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex_abs.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cond.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.conj.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.MultivariateNormal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.fully_connected.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.summarize_activations.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.xavier_initializer_conv2d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.RunConfig.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowDNNClassifier.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowLinearClassifier.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.extract_pandas_matrix.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.infer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.read_batch_record_features.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_recall_at_k.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.decode_csv.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.depth_to_space.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.erf.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.InternalError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnauthenticatedError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnknownError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.exp.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.fft2d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floordiv.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_default_session.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ifft.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.decode_png.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.per_image_whitening.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.random_flip_up_down.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.import_graph_def.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.invert_permutation.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_nan.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.log.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_inverse.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_solve.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_triangular_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.mod.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.dropout.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.in_top_k.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.l2_normalize.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.learned_unigram_candidate_sampler.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.normalize_moments.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu6.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ones.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.pack.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_shuffle.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.reduce_min.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_mul.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_max.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_mean.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sigmoid.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_depth.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_segment_mean.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.stop_gradient.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient_error.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_bfloat16.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_float.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.LooperThread.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.MomentumOptimizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SummaryWriter.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.export_meta_graph.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.range_input_producer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch_join.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.slice_input_producer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.write_graph.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.truediv.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variable_op_scope.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.IndexedSlices.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensorValue.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_n.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank_at_least.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_variables_initialized.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft2d.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft3d.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_band_part.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_triangular_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.bytes.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cast.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.complex.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cond.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DiscreteDistribution.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.decode_audio.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.apply_regularization.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.xavier_initializer_conv2d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.ModeKeys.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowDNNRegressor.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowRNNRegressor.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_labels.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_examples.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_record_features.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.accuracy.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.set_union.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_auc.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.make_ndarray.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.ops_used_by_graph_def.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.stripped_op_list_for_graph.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.depth_to_space.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.erf.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.DeadlineExceededError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.NotFoundError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather_nd.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection_ref.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gradients.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.greater.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.identity.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_hue.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.extract_glimpse.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.grayscale_to_rgb.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_hue.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image_summary.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_non_decreasing.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_strictly_increasing.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.less_equal.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.make_template.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.avg_pool3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.bias_add.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.depthwise_conv2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fixed_unigram_candidate_sampler.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.local_response_normalization.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_uniform_candidate_sampler.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sampled_softmax_loss.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ones_like.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pad.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_single_example.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder_with_default.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.read_file.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_add.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_sub.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_sum.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_add.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_fill_empty_rows.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_placeholder.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_reset_shape.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.split.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.square.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_bfloat16.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_int32.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Coordinator.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Saver.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.create_local_server.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.exponential_decay.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_state.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.write_graph.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unsorted_segment_sum.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.while_loop.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.zeros_initializer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.DType.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.IndexedSlices.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.VariableScope.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.add_check_numerics_ops.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_variables_initialized.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.batch_fft3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cast.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cholesky.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.Uniform.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.apply_regularization.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.l1_regularizer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.sum_regularizer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_activation.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_collection.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_tensors.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.TensorFlowEstimator.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.extract_dask_labels.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.confusion_matrix.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_cosine_distance.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_squared_error.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.constant_value.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.ops_used_by_graph_def.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.convert_to_tensor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.device.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.dynamic_partition.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.CancelledError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.DeadlineExceededError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.OutOfRangeError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.PermissionDeniedError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.UnimplementedError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fft3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.gather.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_collection_ref.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_seed.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_handle.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_tensor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.global_norm.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.identity.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft3d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.igammac.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.imag.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.adjust_saturation.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.convert_image_dtype.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_left_right.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_up_down.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.pad_to_bounding_box.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_image_with_crop_or_pad.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_nearest_neighbor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_grayscale.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_hsv.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.is_non_decreasing.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.lgamma.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.linspace.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_and.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_or.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.map_fn.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.maximum.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.merge_summary.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.mul.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.neg.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.atrous_conv2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool_with_argmax.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.moments.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sampled_softmax_loss.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softmax.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sufficient_statistics.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.weighted_cross_entropy_with_logits.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.not_equal.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.pad.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.parse_example.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.tf_record_iterator.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.read_file.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.real.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reshape.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.scatter_update.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_prod.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.select.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sin.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.slice.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_concat.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_mask.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_strong.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sub.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.test.compute_gradient.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.to_int32.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.trace.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.ClusterSpec.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SessionManager.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.Supervisor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.get_checkpoint_state.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.input_producer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.start_queue_runners.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.string_input_producer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.transpose.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.truncated_normal_initializer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.uniform_unit_scaling_initializer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unpack.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.DType.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Dimension.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLenSequenceFeature.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLengthRecordReader.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.GraphKeys.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Session.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.SparseTensor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.TFRecordReader.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.from_proto.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.accumulate_n.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_equal.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_non_positive.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_positive.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.batch_matmul.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.concat.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.copy_graph.copy_variable_to_graph.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Chi2.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Gamma.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.l2_regularizer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.xavier_initializer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.RunConfig.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.evaluate.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.infer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_feeds.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean_cosine_distance.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.decode_csv.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.digamma.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.dynamic_partition.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.equal.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.AbortedError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.ResourceExhaustedError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnauthenticatedError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.expand_dims.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.fill.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_default_graph.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_seed.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_session_tensor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.global_norm.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.ifft2d.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.adjust_brightness.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_jpeg.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_png.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.random_flip_up_down.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_bilinear.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rgb_to_grayscale.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.initialize_local_variables.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.inv.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_finite.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_variable_initialized.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.lbeta.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.less.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_or.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_xor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_inverse.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_triangular_solve.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.mod.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.neg.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.compute_accidental_hits.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv2d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.depthwise_conv2d_native.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.elu.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.in_top_k.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.uniform_candidate_sampler.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.one_hot.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.op_scope.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.parse_example.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.random_shuffle.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_join.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_max.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_min.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.register_tensor_conversion_function.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reshape.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.round.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.saturate_cast.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_update.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.segment_min.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.select.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sigmoid.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.space_to_batch.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_retain.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_dense_matmul.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_to_dense.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_to_dense.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_hash_bucket_fast.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_number.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.compute_gradient_error.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.add_queue_runner.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.batch_join.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.import_meta_graph.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.shuffle_batch_join.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truediv.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unique.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unpack.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_scope.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeros_like.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeta.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Operation.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ReaderBase.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Tensor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Variable.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add_to_collection.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_dtype.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_integer.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_non_positive.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_rank.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_cholesky.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_fft2d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matmul.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matrix_determinant.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.case.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.clip_by_value.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_op_to_graph.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.DiscreteDistribution.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.StudentT.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.ffmpeg.encode_audio.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.convolution2d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_tensor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.variance_scaling_initializer.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.Estimator.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NanLossDuringTrainingError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowDNNRegressor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowLinearRegressor.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRNNRegressor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRegressor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_features.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.run_n.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.auc_using_histogram.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_size.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_auc.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_percentage_less.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_ndarray.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.stripped_op_list_for_graph.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.control_dependencies.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.diag_part.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.digamma.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.edit_distance.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.equal.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.DataLossError.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.ResourceExhaustedError.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.expand_dims.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fft.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fill.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldl.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldr.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_collection.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_variable_scope.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.greater_equal.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.histogram_fixed_width.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.adjust_brightness.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.draw_bounding_boxes.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.grayscale_to_rgb.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_flip_left_right.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_hue.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_saturation.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_finite.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_inf.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_numeric_tensor.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_strictly_increasing.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_variable_initialized.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.lbeta.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.listdiff.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.matrix_determinant.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.avg_pool3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d_transpose.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup_sparse.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.l2_loss.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_softmax.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_uniform_candidate_sampler.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.max_pool3d.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.zero_fraction.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ones_like.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.placeholder_with_default.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.TFRecordWriter.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_uniform.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.range.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_join.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_max.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_sum.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reverse.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.saturate_cast.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.scatter_add.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.size.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_add.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_retain.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sqrt_n.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_softmax.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_dense_matmul.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_to_dense.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.split.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sqrt.md
 delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.square.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.ExponentialMovingAverage.md
 create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.FtrlOptimizer.md
 create mode 100644
tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Optimizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Server.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.batch.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.replica_device_setter.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.tuple.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.unsorted_segment_sum.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.where.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.while_loop.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.AggregationMethod.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FIFOQueue.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.IdentityReader.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.NoGradient.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.RegisterGradient.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Tensor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_check_numerics_ops.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_to_collection.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmax.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmin.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_integer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_less_equal.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_non_negative.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_proper_iterable.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_type.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.audio_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_ifft2d.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_matrix_inverse.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_to_space.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ceil.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.check_numerics.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_average_norm.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_global_norm.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_value.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.conj.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.copy_graph.copy_op_to_graph.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.BaseDistribution.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ContinuousDistribution.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.StudentT.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.fully_connected.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.l1_regularizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activation.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activations.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.NanLossDuringTrainingError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowDNNClassifier.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowLinearClassifier.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.run_n.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.train.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.confusion_matrix.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.set_intersection.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_percentage_less.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_precision_at_k.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_recall_at_k.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_tensor_proto.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.control_dependencies.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.convert_to_tensor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.device.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag_part.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.AlreadyExistsError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.InternalError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.OutOfRangeError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.UnavailableError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floor.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floordiv.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.foldl.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.gather.md 
delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_default_session.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_handle.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_variable_scope.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.greater_equal.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.histogram_fixed_width.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.imag.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.central_crop.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.crop_to_bounding_box.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.decode_png.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.draw_bounding_boxes.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.per_image_whitening.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_image_with_crop_or_pad.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.transpose_image.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.initialize_all_variables.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.invert_permutation.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_nan.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_numeric_tensor.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.load_op_library.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_and.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.map_fn.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matmul.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_determinant.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.minimum.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.batch_normalization.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.embedding_lookup.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.learned_unigram_candidate_sampler.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.log_softmax.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.max_pool3d.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.moments.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.relu.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softsign.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.sufficient_statistics.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.zero_fraction.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.tf_record_iterator.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_crop.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_uniform_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.range.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_all.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reverse_sequence.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sign.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sin.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.space_to_depth.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_mask.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket_strong.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.tanh.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.test.main.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_double.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_float.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trace.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.FtrlOptimizer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.LooperThread.md delete 
mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.RMSPropOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Saver.from_proto.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Server.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Supervisor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.generate_checkpoint_state_proto.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.input_producer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.limit_epochs.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.shuffle_batch.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.summary_iterator.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trainable_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.transpose.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_axis_size_partitioner.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Dimension.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Graph.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.OpError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Print.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.SparseTensor.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.TextLineReader.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.add_n.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmin.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_positive.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.audio_summary.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_ifft.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag_part.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve_ls.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_triangular_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_average_norm.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_global_norm.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_norm.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.constant_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.ContinuousDistribution.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.DirichletMultinomial.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Exponential.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Gamma.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Normal.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.ffmpeg.decode_audio.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.ModeKeys.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.accuracy.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.set_intersection.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_precision.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_recall.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.convert_to_tensor_or_indexed_slices.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cos.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.count_up_to.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.decode_raw.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.diag.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.AbortedError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.FailedPreconditionError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.get_default_graph.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.gradients.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.histogram_summary.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_contrast.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_hue.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.central_crop.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.decode_jpeg.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.encode_png.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.hsv_to_rgb.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bicubic.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.sample_distorted_bounding_box.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image_summary.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.initialize_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.load_file_system_library.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_not.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_xor.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.make_template.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.minimum.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.multinomial.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.name_scope.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.avg_pool.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bias_add.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.compute_accidental_hits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.depthwise_conv2d_native.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.elu.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.fixed_unigram_candidate_sampler.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.local_response_normalization.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.separable_conv2d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sigmoid_cross_entropy_with_logits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.softsign.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sparse_softmax_cross_entropy_with_logits.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.one_hot.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.op_scope.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.py_func.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal_initializer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_all.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_mean.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.register_tensor_conversion_function.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scan.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_sub.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_min.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_sum.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.shape_n.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_placeholder.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sum.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_split.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.string_to_hash_bucket.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.get_temp_dir.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.is_built_with_cuda.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tile.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.to_int64.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdamOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Coordinator.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Server.create_local_server.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.add_queue_runner.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.exponential_decay.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.global_step.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.latest_checkpoint.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.match_filenames_once.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.summary_iterator.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.trainable_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.unique.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.verify_tensor_all_finite.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.InteractiveSession.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.OpError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Operation.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Print.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ReaderBase.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.RegisterShape.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.TensorShape.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.VarLenFeature.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.abs.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.add.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.all_variables.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.as_dtype.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assert_less.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_cholesky_solve.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_fft.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_ifft3d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_matrix_diag_part.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.bitcast.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.boolean_mask.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.case.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cholesky.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.copy_graph.get_copied_op.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.DirichletMultinomial.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Normal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.encode_audio.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.optimize_loss.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.BaseEstimator.md delete mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRNNClassifier.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRegressor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_dask_data.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_pandas_labels.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_difference.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_size.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_mean_absolute_error.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_recall.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_root_mean_squared_error.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cos.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.decode_raw.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.dynamic_stitch.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.edit_distance.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.erfc.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnimplementedError.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fft2d.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.igammac.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_contrast.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_images.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_nearest_neighbor.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.rgb_to_hsv.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.sample_distorted_bounding_box.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.lgamma.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matching_files.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.merge_all_summaries.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.moving_average_variables.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.name_scope.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.nce_loss.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.relu6.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.softplus.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.no_op.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.not_equal.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.polygamma.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.python_io.TFRecordWriter.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal_initializer.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_any.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_sum.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reset_default_graph.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.rsqrt.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.set_random_seed.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.slice.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reorder.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n_grad.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sum.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_split.md create mode 100644 
tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_to_indicator.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sqrt.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squeeze.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.stop_gradient.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.compute_gradient.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.AdamOptimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ClusterSpec.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ExponentialMovingAverage.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.Optimizer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.from_proto.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.batch.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.export_meta_graph.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.latest_checkpoint.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.update_checkpoint_state.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.uniform_unit_scaling_initializer.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.where.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.zeros.md diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.AggregationMethod.md 
b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.AggregationMethod.md new file mode 100644 index 0000000000..ee655fbd25 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.AggregationMethod.md @@ -0,0 +1,10 @@ +A class listing aggregation methods used to combine gradients. + +Computing partial derivatives can require aggregating gradient +contributions. This class lists the various methods that can +be used to combine gradients in the graph: + +* `ADD_N`: All of the gradient terms are summed as part of one + operation using the "AddN" op. It has the property that all + gradients must be ready before any aggregation is performed. +* `DEFAULT`: The system-chosen default aggregation method. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.Assert.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.Assert.md new file mode 100644 index 0000000000..6471b9aea4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.Assert.md @@ -0,0 +1,29 @@ +### `tf.Assert(condition, data, summarize=None, name=None)` {#Assert} + +Asserts that the given condition is true. + +If `condition` evaluates to false, print the list of tensors in `data`. +`summarize` determines how many entries of the tensors to print. + +NOTE: To ensure that Assert executes, one usually attaches a dependency: + +```python + # Ensure maximum element of x is smaller or equal to 1 +assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x]) +x = tf.with_dependencies([assert_op], x) +``` + +##### Args: + + +* `condition`: The condition to evaluate. +* `data`: The tensors to print out when condition is false. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). + +##### Returns: + + +* `assert_op`: An `Operation` that, when executed, raises a + `tf.errors.InvalidArgumentError` if `condition` is not true. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FIFOQueue.md new file mode 100644 index 0000000000..129107384f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FIFOQueue.md @@ -0,0 +1,41 @@ +A queue implementation that dequeues elements in first-in-first out order. + +See [`tf.QueueBase`](#QueueBase) for a description of the methods on +this class. + +- - - + +#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__} + +Creates a queue that dequeues elements in a first-in first-out order. + +A `FIFOQueue` has bounded capacity; supports multiple concurrent +producers and consumers; and provides exactly-once delivery. + +A `FIFOQueue` holds a list of up to `capacity` elements. Each +element is a fixed-length tuple of tensors whose dtypes are +described by `dtypes`, and whose shapes are optionally described +by the `shapes` argument. + +If the `shapes` argument is specified, each component of a queue +element must have the respective fixed shape. If it is +unspecified, different queue elements may have different shapes, +but the use of `dequeue_many` is disallowed. + +##### Args: + + +* `capacity`: An integer. The upper bound on the number of elements + that may be stored in this queue. +* `dtypes`: A list of `DType` objects. The length of `dtypes` must equal + the number of tensors in each queue element. +* `shapes`: (Optional.) A list of fully-defined `TensorShape` objects + with the same length as `dtypes`, or `None`. +* `names`: (Optional.) A list of string naming the components in the queue + with the same length as `dtypes`, or `None`. If specified the dequeue + methods return a dictionary with the names as keys. +* `shared_name`: (Optional.) If non-empty, this queue will be shared under + the given name across multiple sessions. 
+* `name`: Optional name for the queue operation. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLenFeature.md new file mode 100644 index 0000000000..0ae54940a8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLenFeature.md @@ -0,0 +1,31 @@ +Configuration for parsing a fixed-length input feature. + +To treat sparse input as dense, provide a `default_value`; otherwise, +the parse functions will fail on any examples missing this feature. + +Fields: + shape: Shape of input data. + dtype: Data type of input. + default_value: Value to be used if an example is missing this feature. It + must be compatible with `dtype`. +- - - + +#### `tf.FixedLenFeature.default_value` {#FixedLenFeature.default_value} + +Alias for field number 2 + + +- - - + +#### `tf.FixedLenFeature.dtype` {#FixedLenFeature.dtype} + +Alias for field number 1 + + +- - - + +#### `tf.FixedLenFeature.shape` {#FixedLenFeature.shape} + +Alias for field number 0 + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLengthRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLengthRecordReader.md deleted file mode 100644 index e8a94cb825..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.FixedLengthRecordReader.md +++ /dev/null @@ -1,148 +0,0 @@ -A Reader that outputs fixed-length records from a file. - -See ReaderBase for supported methods. -- - - - -#### `tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None)` {#FixedLengthRecordReader.__init__} - -Create a FixedLengthRecordReader. - -##### Args: - - -* `record_bytes`: An int. -* `header_bytes`: An optional int. Defaults to 0. -* `footer_bytes`: An optional int. Defaults to 0. -* `name`: A name for the operation (optional). 
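The FIFO semantics described for `FIFOQueue` above (bounded capacity, elements dequeued in first-in first-out order) can be sketched in plain Python without TensorFlow. This is an illustrative model only: the class name `MiniFIFOQueue` is invented for this sketch, and it ignores the real queue's concurrency and blocking behavior.

```python
from collections import deque

class MiniFIFOQueue:
    """Plain-Python sketch of FIFOQueue ordering semantics (no concurrency)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = deque()

    def enqueue(self, item):
        # The real enqueue op would block when full; the sketch raises instead.
        if len(self._items) >= self.capacity:
            raise RuntimeError("queue is full")
        self._items.append(item)

    def dequeue(self):
        # The real dequeue op would block when empty; the sketch raises instead.
        if not self._items:
            raise RuntimeError("queue is empty")
        return self._items.popleft()

q = MiniFIFOQueue(capacity=3)
for x in [1, 2, 3]:
    q.enqueue(x)
print([q.dequeue() for _ in range(3)])  # elements come out in insertion order: [1, 2, 3]
```

The capacity bound mirrors the `capacity` constructor argument documented above; everything else about the real op (dtypes, shapes, shared names) is out of scope for this sketch.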
- - -- - - - -#### `tf.FixedLengthRecordReader.num_records_produced(name=None)` {#FixedLengthRecordReader.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.FixedLengthRecordReader.num_work_units_completed(name=None)` {#FixedLengthRecordReader.num_work_units_completed} - -Returns the number of work units this reader has finished processing. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.FixedLengthRecordReader.read(queue, name=None)` {#FixedLengthRecordReader.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.FixedLengthRecordReader.reader_ref` {#FixedLengthRecordReader.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.FixedLengthRecordReader.reset(name=None)` {#FixedLengthRecordReader.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.FixedLengthRecordReader.restore_state(state, name=None)` {#FixedLengthRecordReader.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. 
- -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.FixedLengthRecordReader.serialize_state(name=None)` {#FixedLengthRecordReader.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. - - -- - - - -#### `tf.FixedLengthRecordReader.supports_serialize` {#FixedLengthRecordReader.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.NoGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.NoGradient.md deleted file mode 100644 index 15c40e6828..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.NoGradient.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.NoGradient(op_type)` {#NoGradient} - -Specifies that ops of type `op_type` do not have a defined gradient. - -This function is only used when defining a new op type. It may be -used for ops such as `tf.size()` that are not differentiable. For -example: - -```python -tf.NoGradient("Size") -``` - -##### Args: - - -* `op_type`: The string type of an operation. This corresponds to the - `OpDef.name` field for the proto that defines the operation. - -##### Raises: - - -* `TypeError`: If `op_type` is not a string. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.RegisterGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.RegisterGradient.md deleted file mode 100644 index 736bd5b4af..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.RegisterGradient.md +++ /dev/null @@ -1,36 +0,0 @@ -A decorator for registering the gradient function for an op type. - -This decorator is only used when defining a new op type. For an op -with `m` inputs and `n` outputs, the gradient function is a function -that takes the original `Operation` and `n` `Tensor` objects -(representing the gradients with respect to each output of the op), -and returns `m` `Tensor` objects (representing the partial gradients -with respect to each input of the op). - -For example, assuming that operations of type `"Sub"` take two -inputs `x` and `y`, and return a single output `x - y`, the -following gradient function would be registered: - -```python -@tf.RegisterGradient("Sub") -def _sub_grad(unused_op, grad): - return grad, tf.neg(grad) -``` - -The decorator argument `op_type` is the string type of an -operation. This corresponds to the `OpDef.name` field for the proto -that defines the operation. - -- - - - -#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__} - -Creates a new decorator with `op_type` as the Operation type. - -##### Args: - - -* `op_type`: The string type of an operation. This corresponds to the - `OpDef.name` field for the proto that defines the operation. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md deleted file mode 100644 index efa3314f23..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md +++ /dev/null @@ -1,22 +0,0 @@ -SparseTensorValue(indices, values, shape) -- - - - -#### `tf.SparseTensorValue.indices` {#SparseTensorValue.indices} - -Alias for field number 0 - - -- - - - -#### `tf.SparseTensorValue.shape` {#SparseTensorValue.shape} - -Alias for field number 2 - - -- - - - -#### `tf.SparseTensorValue.values` {#SparseTensorValue.values} - -Alias for field number 1 - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md deleted file mode 100644 index 506f44d838..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md +++ /dev/null @@ -1,316 +0,0 @@ -Represents the shape of a `Tensor`. - -A `TensorShape` represents a possibly-partial shape specification for a -`Tensor`. It may be one of the following: - -* *Fully-known shape:* has a known number of dimensions and a known size - for each dimension. -* *Partially-known shape:* has a known number of dimensions, and an unknown - size for one or more dimension. -* *Unknown shape:* has an unknown number of dimensions, and an unknown - size in all dimensions. - -If a tensor is produced by an operation of type `"Foo"`, its shape -may be inferred if there is a registered shape function for -`"Foo"`. See [`tf.RegisterShape()`](../../api_docs/python/framework.md#RegisterShape) -for details of shape -functions and how to register them. Alternatively, the shape may be set -explicitly using [`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape). 
- -- - - - -#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with} - -Returns a `TensorShape` combining the information in `self` and `other`. - -The dimensions in `self` and `other` are merged elementwise, -according to the rules defined for `Dimension.merge_with()`. - -##### Args: - - -* `other`: Another `TensorShape`. - -##### Returns: - - A `TensorShape` containing the combined information of `self` and - `other`. - -##### Raises: - - -* `ValueError`: If `self` and `other` are not compatible. - - -- - - - -#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate} - -Returns the concatenation of the dimension in `self` and `other`. - -*N.B.* If either `self` or `other` is completely unknown, -concatenation will discard information about the other shape. In -future, we might support concatenation that preserves this -information for use with slicing. - -##### Args: - - -* `other`: Another `TensorShape`. - -##### Returns: - - A `TensorShape` whose dimensions are the concatenation of the - dimensions in `self` and `other`. - - - -- - - - -#### `tf.TensorShape.ndims` {#TensorShape.ndims} - -Returns the rank of this shape, or None if it is unspecified. - - -- - - - -#### `tf.TensorShape.dims` {#TensorShape.dims} - -Returns a list of Dimensions, or None if the shape is unspecified. - - -- - - - -#### `tf.TensorShape.as_list()` {#TensorShape.as_list} - -Returns a list of integers or None for each dimension. - -##### Returns: - - A list of integers or None for each dimension. - - -- - - - -#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto} - -Returns this shape as a `TensorShapeProto`. - - -- - - - -#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with} - -Returns True iff `self` is compatible with `other`. - -Two possibly-partially-defined shapes are compatible if there -exists a fully-defined shape that both shapes can represent. 
Thus, -compatibility allows the shape inference code to reason about -partially-defined shapes. For example: - -* TensorShape(None) is compatible with all shapes. - -* TensorShape([None, None]) is compatible with all two-dimensional - shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is - not compatible with, for example, TensorShape([None]) or - TensorShape([None, None, None]). - -* TensorShape([32, None]) is compatible with all two-dimensional shapes - with size 32 in the 0th dimension, and also TensorShape([None, None]) - and TensorShape(None). It is not compatible with, for example, - TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]). - -* TensorShape([32, 784]) is compatible with itself, and also - TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None, - None]) and TensorShape(None). It is not compatible with, for example, - TensorShape([32, 1, 784]) or TensorShape([None]). - -The compatibility relation is reflexive and symmetric, but not -transitive. For example, TensorShape([32, 784]) is compatible with -TensorShape(None), and TensorShape(None) is compatible with -TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with -TensorShape([4, 4]). - -##### Args: - - -* `other`: Another TensorShape. - -##### Returns: - - True iff `self` is compatible with `other`. - - -- - - - -#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined} - -Returns True iff `self` is fully defined in every dimension. - - - -- - - - -#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank} - -Returns a shape based on `self` with the given rank. - -This method promotes a completely unknown shape to one with a -known rank. - -##### Args: - - -* `rank`: An integer. - -##### Returns: - - A shape that is at least as specific as `self` with the given rank. - -##### Raises: - - -* `ValueError`: If `self` does not represent a shape with the given `rank`. 
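The `is_compatible_with` rules listed above can be modeled in a few lines of plain Python, representing an unknown rank as `None` and an unknown dimension as `None`. This is an illustrative sketch of the documented relation, not the TensorFlow implementation; the function names are invented.

```python
def dims_compatible(a, b):
    # Two dimensions are compatible if either is unknown or they are equal.
    return a is None or b is None or a == b

def shapes_compatible(s, t):
    # s, t: None for an unknown rank, or a list of ints/None per dimension.
    # A fully unknown shape is compatible with everything.
    if s is None or t is None:
        return True
    return len(s) == len(t) and all(dims_compatible(a, b) for a, b in zip(s, t))

# The documented examples hold under this model:
assert shapes_compatible(None, [32, 784])          # unknown rank matches anything
assert shapes_compatible([32, None], [None, None]) # unknown dims match known ones
assert not shapes_compatible([32, 784], [4, 4])    # known dims must agree
assert not shapes_compatible([32, None], [32, None, 1])  # ranks must agree
```

The non-transitivity noted in the text falls out directly: `[32, 784]` and `[4, 4]` are each compatible with `None`, yet not with each other.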
- - -- - - - -#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least} - -Returns a shape based on `self` with at least the given rank. - -##### Args: - - -* `rank`: An integer. - -##### Returns: - - A shape that is at least as specific as `self` with at least the given - rank. - -##### Raises: - - -* `ValueError`: If `self` does not represent a shape with at least the given - `rank`. - - -- - - - -#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most} - -Returns a shape based on `self` with at most the given rank. - -##### Args: - - -* `rank`: An integer. - -##### Returns: - - A shape that is at least as specific as `self` with at most the given - rank. - -##### Raises: - - -* `ValueError`: If `self` does not represent a shape with at most the given - `rank`. - - - -- - - - -#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank} - -Raises an exception if `self` is not compatible with the given `rank`. - -##### Args: - - -* `rank`: An integer. - -##### Raises: - - -* `ValueError`: If `self` does not represent a shape with the given `rank`. - - -- - - - -#### `tf.TensorShape.assert_same_rank(other)` {#TensorShape.assert_same_rank} - -Raises an exception if `self` and `other` do not have compatible ranks. - -##### Args: - - -* `other`: Another `TensorShape`. - -##### Raises: - - -* `ValueError`: If `self` and `other` do not represent shapes with the - same rank. - - -- - - - -#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with} - -Raises exception if `self` and `other` do not represent the same shape. - -This method can be used to assert that there exists a shape that both -`self` and `other` represent. - -##### Args: - - -* `other`: Another TensorShape. - -##### Raises: - - -* `ValueError`: If `self` and `other` do not represent the same shape. 
- - -- - - - -#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined} - -Raises an exception if `self` is not fully defined in every dimension. - -##### Raises: - - -* `ValueError`: If `self` does not have a known value for every dimension. - - - -#### Other Methods -- - - - -#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__} - -Creates a new TensorShape with the given dimensions. - -##### Args: - - -* `dims`: A list of Dimensions, or None if the shape is unspecified. -* `DEPRECATED`: A single integer is treated as a singleton list. - -##### Raises: - - -* `TypeError`: If dims cannot be converted to a list of dimensions. - - -- - - - -#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements} - -Returns the total number of elements, or none for incomplete shapes. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.accumulate_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.accumulate_n.md deleted file mode 100644 index a85d0d7f87..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.accumulate_n.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)` {#accumulate_n} - -Returns the element-wise sum of a list of tensors. - -Optionally, pass `shape` and `tensor_dtype` for shape and type checking, -otherwise, these are inferred. - -For example: - -```python -# tensor 'a' is [[1, 2], [3, 4]] -# tensor `b` is [[5, 0], [0, 6]] -tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]] - -# Explicitly pass shape and type -tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) - ==> [[7, 4], [6, 14]] -``` - -##### Args: - - -* `inputs`: A list of `Tensor` objects, each with same shape and type. -* `shape`: Shape of elements of `inputs`. -* `tensor_dtype`: The type of `inputs`. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A `Tensor` of same shape and type as the elements of `inputs`. - -##### Raises: - - -* `ValueError`: If `inputs` don't all have same shape and dtype or the shape - cannot be inferred. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_equal.md deleted file mode 100644 index ea4fd3a1fd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_equal.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.assert_equal(x, y, data=None, summarize=None, name=None)` {#assert_equal} - -Assert the condition `x == y` holds element-wise. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_equal(x, y)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_equal(x, y)], x) -``` - -This condition holds if for every pair of (possibly broadcast) elements -`x[i]`, `y[i]`, we have `x[i] == y[i]`. -If both `x` and `y` are empty, this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`, `y`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_equal". - -##### Returns: - - Op that raises `InvalidArgumentError` if `x == y` is False. 
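The element-wise accumulation documented for `tf.accumulate_n` above can be reproduced in plain Python for the 2-D case, including its shape check. This is a hypothetical sketch (the name `accumulate_n_sketch` is invented), mirroring the worked example from the doc.

```python
def accumulate_n_sketch(inputs):
    """Element-wise sum of a list of equal-shape 2-D lists (plain-Python sketch)."""
    shapes = {(len(m), len(m[0])) for m in inputs}
    if len(shapes) != 1:
        # Mirrors the documented ValueError when inputs' shapes disagree.
        raise ValueError("all inputs must have the same shape")
    rows, cols = shapes.pop()
    return [[sum(m[i][j] for m in inputs) for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 0], [0, 6]]
print(accumulate_n_sketch([a, b, a]))  # [[7, 4], [6, 14]], matching the doc's example
```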
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_less.md deleted file mode 100644 index eb43a62444..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_less.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.assert_less(x, y, data=None, summarize=None, name=None)` {#assert_less} - -Assert the condition `x < y` holds element-wise. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_less(x, y)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_less(x, y)], x) -``` - -This condition holds if for every pair of (possibly broadcast) elements -`x[i]`, `y[i]`, we have `x[i] < y[i]`. -If both `x` and `y` are empty, this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`, `y`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_less". - -##### Returns: - - Op that raises `InvalidArgumentError` if `x < y` is False. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_negative.md deleted file mode 100644 index 81daebec0d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_negative.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.assert_negative(x, data=None, summarize=None, name=None)` {#assert_negative} - -Assert the condition `x < 0` holds element-wise. 
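The element-wise condition behind `tf.assert_equal` and `tf.assert_less` above ("for every pair of elements `x[i]`, `y[i]` ... trivially satisfied when both are empty") can be sketched in plain Python for same-length sequences. Broadcasting is deliberately omitted; the function name is invented for illustration.

```python
def assert_less_sketch(x, y):
    # Element-wise check x[i] < y[i] over same-length sequences.
    # Empty inputs pass trivially, mirroring the documented behavior.
    if any(a >= b for a, b in zip(x, y)):
        raise ValueError("assert_less failed")

assert_less_sketch([1, 2], [2, 3])  # passes silently
assert_less_sketch([], [])          # trivially satisfied
```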
- -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_negative(x)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_negative(x)], x) -``` - -Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. -If `x` is empty this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_negative". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` is all negative. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_negative.md deleted file mode 100644 index 47f07a698a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_negative.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.assert_non_negative(x, data=None, summarize=None, name=None)` {#assert_non_negative} - -Assert the condition `x >= 0` holds element-wise. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_non_negative(x)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_non_negative(x)], x) -``` - -Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. -If `x` is empty this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). 
- Defaults to "assert_non_negative". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` is all non-negative. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_rank_at_least.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_rank_at_least.md deleted file mode 100644 index 1b33f3401b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_rank_at_least.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.assert_rank_at_least(x, rank, data=None, summarize=None, name=None)` {#assert_rank_at_least} - -Assert `x` has rank equal to `rank` or higher. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_rank_at_least(x, 2)], x) -``` - -##### Args: - - -* `x`: Numeric `Tensor`. -* `rank`: Scalar `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). - Defaults to "assert_rank_at_least". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` has specified rank or higher. - -##### Raises: - - -* `ValueError`: If static checks determine `x` has wrong rank. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_type.md new file mode 100644 index 0000000000..e98b9dc4af --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_type.md @@ -0,0 +1,15 @@ +### `tf.assert_type(tensor, tf_type)` {#assert_type} + +Asserts that the given `Tensor` is of the specified type. + +##### Args: + + +* `tensor`: A tensorflow `Tensor`. 
+* `tf_type`: A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc.).
+
+##### Raises:
+
+
+* `ValueError`: If the tensor's data type doesn't match tf_type.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_ifft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_ifft2d.md
new file mode 100644
index 0000000000..4476637122
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_ifft2d.md
@@ -0,0 +1,18 @@
+### `tf.batch_ifft2d(input, name=None)` {#batch_ifft2d}
+
+Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most
+
+2 dimensions of `input`.
+
+##### Args:
+
+
+* `input`: A `Tensor` of type `complex64`. A complex64 tensor.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `complex64`.
+  A complex64 tensor of the same shape as `input`. The inner-most 2
+  dimensions of `input` are replaced with their inverse 2D Fourier Transform.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_matrix_band_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_matrix_band_part.md
new file mode 100644
index 0000000000..d9c208a460
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_matrix_band_part.md
@@ -0,0 +1,60 @@
+### `tf.batch_matrix_band_part(input, num_lower, num_upper, name=None)` {#batch_matrix_band_part}
+
+Copy a tensor setting everything outside a central band in each innermost matrix
+
+to zero.
+
+The `band` part is computed as follows:
+Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
+tensor with the same shape where
+
+`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.
+
+The indicator function `in_band(m, n)` is one if
+`(num_lower < 0 || (m-n) <= num_lower) &&
+(num_upper < 0 || (n-m) <= num_upper)`, and zero otherwise.
+ +For example: + +```prettyprint +# if 'input' is [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [-2, -1, 0, 1] + [-3, -2, -1, 0]], + +tf.batch_matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3] + [-1, 0, 1, 2] + [ 0, -1, 0, 1] + [ 0, 0, -1, 0]], + +tf.batch_matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0] + [-1, 0, 1, 0] + [-2, -1, 0, 1] + [ 0, -2, -1, 0]] +``` + +Useful special cases: + +```prettyprint + tf.batch_matrix_band_part(input, 0, -1) ==> Upper triangular part. + tf.batch_matrix_band_part(input, -1, 0) ==> Lower triangular part. + tf.batch_matrix_band_part(input, 0, 0) ==> Diagonal. +``` + +##### Args: + + +* `input`: A `Tensor`. Rank `k` tensor. +* `num_lower`: A `Tensor` of type `int64`. + 0-D tensor. Number of subdiagonals to keep. If negative, keep entire + lower triangle. +* `num_upper`: A `Tensor` of type `int64`. + 0-D tensor. Number of superdiagonals to keep. If negative, keep + entire upper triangle. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + Rank `k` tensor of the same shape as input. The extracted banded tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_to_space.md deleted file mode 100644 index d4a66ac8e0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.batch_to_space.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.batch_to_space(input, crops, block_size, name=None)` {#batch_to_space} - -BatchToSpace for 4-D tensors of type T. - -Rearranges (permutes) data from batch into blocks of spatial data, followed by -cropping. This is the reverse transformation of SpaceToBatch. More specifically, -this op outputs a copy of the input tensor where values from the `batch` -dimension are moved in spatial blocks to the `height` and `width` dimensions, -followed by cropping along the `height` and `width` dimensions. 
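The batch/space rearrangement described above can be illustrated with a small NumPy sketch. Both helpers below are hypothetical illustrations assuming one plausible block layout (block row offset varying slowest), not the op's exact implementation; the round trip shows that `batch_to_space` inverts `space_to_batch` when `crops` equals the earlier `paddings`:

```python
import numpy as np

def space_to_batch(x, paddings, bs):
    # Hypothetical NumPy mirror: pad, then move bs x bs spatial blocks
    # into the batch dimension (layout assumption: row offset slowest).
    b, h, w, d = x.shape
    (pt, pb), (pl, pr) = paddings
    x = np.pad(x, ((0, 0), (pt, pb), (pl, pr), (0, 0)))
    hp, wp = h + pt + pb, w + pl + pr
    y = x.reshape(b, hp // bs, bs, wp // bs, bs, d)
    y = y.transpose(2, 4, 0, 1, 3, 5)        # (by, bx, b, h', w', d)
    return y.reshape(bs * bs * b, hp // bs, wp // bs, d)

def batch_to_space(x, crops, bs):
    # Inverse transformation: batch blocks back to spatial dims, then crop.
    bb, h, w, d = x.shape
    b = bb // (bs * bs)
    y = x.reshape(bs, bs, b, h, w, d)
    y = y.transpose(2, 3, 0, 4, 1, 5)        # (b, h', by, w', bx, d)
    y = y.reshape(b, h * bs, w * bs, d)
    (ct, cb), (cl, cr) = crops
    return y[:, ct:h * bs - cb, cl:w * bs - cr, :]

x = np.arange(16).reshape(1, 4, 4, 1)
y = space_to_batch(x, ((0, 0), (0, 0)), 2)
print(y.shape)                               # (4, 2, 2, 1)
z = batch_to_space(y, ((0, 0), (0, 0)), 2)
assert (z == x).all()                        # round trip recovers the input
```

Note how the batch size of the intermediate result is `batch * block_size * block_size`, matching the shape requirement stated in the Args section.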
- -##### Args: - - -* `input`: A `Tensor`. 4-D tensor with shape - `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, - depth]`. Note that the batch size of the input tensor must be divisible by - `block_size * block_size`. -* `crops`: A `Tensor` of type `int32`. - 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies - how many elements to crop from the intermediate result across the spatial - dimensions as follows: - - crops = [[crop_top, crop_bottom], [crop_left, crop_right]] - -* `block_size`: An `int`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - 4-D with shape `[batch, height, width, depth]`, where: - - height = height_pad - crop_top - crop_bottom - width = width_pad - crop_left - crop_right - - The attr `block_size` must be greater than one. It indicates the block size. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.bytes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.bytes.md new file mode 100644 index 0000000000..5353507e39 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.bytes.md @@ -0,0 +1,4 @@ +str(object='') -> string + +Return a nice string representation of the object. +If the argument is a string, the return value is the same object. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.concat.md new file mode 100644 index 0000000000..c54a7503da --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.concat.md @@ -0,0 +1,45 @@ +### `tf.concat(concat_dim, values, name='concat')` {#concat} + +Concatenates tensors along one dimension. + +Concatenates the list of tensors `values` along dimension `concat_dim`. If +`values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn]`, the concatenated +result has shape + + [D0, D1, ... 
Rconcat_dim, ...Dn]
+
+where
+
+    Rconcat_dim = sum(Dconcat_dim(i))
+
+That is, the data from the input tensors is joined along the `concat_dim`
+dimension.
+
+The number of dimensions of the input tensors must match, and all dimensions
+except `concat_dim` must be equal.
+
+For example:
+
+```python
+t1 = [[1, 2, 3], [4, 5, 6]]
+t2 = [[7, 8, 9], [10, 11, 12]]
+tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
+tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
+
+# tensor t3 with shape [2, 3]
+# tensor t4 with shape [2, 3]
+tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
+tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
+```
+
+##### Args:
+
+
+* `concat_dim`: 0-D `int32` `Tensor`. Dimension along which to concatenate.
+* `values`: A list of `Tensor` objects or a single `Tensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` resulting from concatenation of the input tensors.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.copy_graph.get_copied_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.copy_graph.get_copied_op.md
new file mode 100644
index 0000000000..9e5a2118fd
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.copy_graph.get_copied_op.md
@@ -0,0 +1,18 @@
+### `tf.contrib.copy_graph.get_copied_op(org_instance, graph, scope='')` {#get_copied_op}
+
+Given an `Operation` instance from some `Graph`, returns
+its namesake from `graph`, under the specified scope
+(default `""`).
+
+If a copy of `org_instance` is present in `graph` under the given
+`scope`, it will be returned.
+
+##### Args:
+* `org_instance`: An `Operation` from some `Graph`.
+* `graph`: The `Graph` to be searched for a copy of `org_instance`.
+* `scope`: The scope `org_instance` is present in.
+
+##### Returns:
+
+  The `Operation` copy from `graph`.
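The `tf.concat` shape arithmetic shown earlier can be checked with NumPy's equivalent `np.concatenate`, whose `axis` argument plays the role of `concat_dim`:

```python
import numpy as np

t1 = np.array([[1, 2, 3], [4, 5, 6]])
t2 = np.array([[7, 8, 9], [10, 11, 12]])

# Along axis 0, rows are stacked: shapes [2, 3] + [2, 3] -> [4, 3].
r0 = np.concatenate([t1, t2], axis=0)
# Along axis 1, columns are joined: shapes [2, 3] + [2, 3] -> [2, 6].
r1 = np.concatenate([t1, t2], axis=1)

print(r0.shape)  # (4, 3)
print(r1.shape)  # (2, 6)
```

All dimensions other than the concatenation axis must agree, exactly as stated for `tf.concat`.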
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.BaseDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.BaseDistribution.md deleted file mode 100644 index 65b516af08..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.BaseDistribution.md +++ /dev/null @@ -1,195 +0,0 @@ -Abstract base class for probability distributions. - -This class, along with `ContinuousDistribution` and `DiscreteDistribution`, -defines the API for probability distributions. - -Users will never instantiate a `BaseDistribution`, but will instead -instantiate subclasses of either `ContinuousDistribution` or -`DiscreteDistribution`. - -Developers of new distributions should prefer to subclass -`ContinuousDistribution` or `DiscreteDistribution`. - -### API - -The key methods for probability distributions are defined here. The likelihood -functions (`pdf`, `log_pdf`) and (`pmf`, `log_pmf`) are defined in -`ContinuousDistribution` and `DiscreteDistribution`, respectively. - -To keep ops generated by the distribution tied together by name, subclasses -should override `name` and use it to preprend names of ops in other methods -(see `cdf` for an example). - -Subclasses that wish to support `cdf` and `log_cdf` can override `log_cdf` -and use the base class's implementation for `cdf`. - -### Broadcasting, batching, and shapes - -All distributions support batches of independent distributions of that type. -The batch shape is determined by broadcasting together the parameters. - -The shape of arguments to `__init__`, `cdf`, `log_cdf`, and the likelihood -functions defined in `ContinuousDistribution` and `DiscreteDistribution` -reflect this broadcasting, as does the return value of `sample`. 
- -`sample_shape = (n,) + batch_shape + event_shape`, where `sample_shape` is the -shape of the `Tensor` returned from `sample`, `n` is the number of samples, -`batch_shape` defines how many independent distributions there are, and -`event_shape` defines the shape of samples from each of those independent -distributions. Samples are independent along the `batch_shape` dimensions, -but not necessarily so along the `event_shape` dimensions (dependending on -the particulars of the underlying distribution). - -Using the `Uniform` distribution as an example: - -```python -minval = 3.0 -maxval = [[4.0, 6.0], - [10.0, 12.0]] - -# Broadcasting: -# This instance represents 4 Uniform distributions. Each has a lower bound at -# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape. -u = Uniform(minval, maxval) - -# `event_shape` is `TensorShape([])`. -event_shape = u.get_event_shape() -# `event_shape_t` is a `Tensor` which will evaluate to a scalar 1. -event_shape_t = u.event_shape - -# Sampling returns a sample per distribution. `samples` has shape -# (5, 2, 2), which is (n,) + batch_shape + event_shape, where n=5, -# batch_shape=(2, 2), and event_shape=(). -samples = u.sample(5) - -# The broadcasting holds across methods. Here we use `cdf` as an example. The -# same holds for `log_cdf` and the likelihood functions. - -# `cum_prob` has shape (2, 2) as the `value` argument was broadcasted to the -# shape of the `Uniform` instance. -cum_prob_broadcast = u.cdf(4.0) - -# `cum_prob`'s shape is (2, 2), one per distribution. No broadcasting -# occurred. -cum_prob_per_dist = u.cdf([[4.0, 5.0], - [6.0, 7.0]]) - -# INVALID as the `value` argument is not broadcastable to the distribution's -# shape. -cum_prob_invalid = u.cdf([4.0, 5.0, 6.0]) -``` -- - - - -#### `tf.contrib.distributions.BaseDistribution.batch_shape(name=None)` {#BaseDistribution.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. 
- -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.cdf(value, name='cdf')` {#BaseDistribution.cdf} - -Cumulative distribution function. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.dtype` {#BaseDistribution.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.entropy(name=None)` {#BaseDistribution.entropy} - -Entropy of the distribution in nats. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.event_shape(name=None)` {#BaseDistribution.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.get_batch_shape()` {#BaseDistribution.get_batch_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `batch_shape`. May be only partially defined. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.get_event_shape()` {#BaseDistribution.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.log_cdf(value, name='log_cdf')` {#BaseDistribution.log_cdf} - -Log CDF. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.mean` {#BaseDistribution.mean} - - - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.name` {#BaseDistribution.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.BaseDistribution.sample(n, seed=None, name=None)` {#BaseDistribution.sample} - -Generate `n` samples. - -##### Args: - - -* `n`: scalar. 
Number of samples to draw from each distribution.
-* `seed`: Python integer seed for RNG
-* `name`: name to give to the op.
-
-##### Returns:
-
-
-* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
-    with values of type `self.dtype`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2.md
new file mode 100644
index 0000000000..61ca5fb9d3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2.md
@@ -0,0 +1,260 @@
+The Chi2 distribution with degrees of freedom df.
+
+The PDF of this distribution is:
+
+```pdf(x) = (x^(df/2 - 1)e^(-x/2))/(2^(df/2)Gamma(df/2)), x > 0```
+
+Note that the Chi2 distribution is a special case of the Gamma distribution,
+with Chi2(df) = Gamma(df/2, 1/2).
+- - -
+
+#### `tf.contrib.distributions.Chi2.__init__(df, name='Chi2')` {#Chi2.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.Chi2.alpha` {#Chi2.alpha}
+
+Shape parameter.
+
+
+- - -
+
+#### `tf.contrib.distributions.Chi2.batch_shape(name='batch_shape')` {#Chi2.batch_shape}
+
+Batch dimensions of this instance as a 1-D int32 `Tensor`.
+
+The product of the dimensions of the `batch_shape` is the number of
+independent distributions of this kind the instance represents.
+
+##### Args:
+
+
+* `name`: name to give to the op
+
+##### Returns:
+
+  `Tensor` `batch_shape`
+
+
+- - -
+
+#### `tf.contrib.distributions.Chi2.beta` {#Chi2.beta}
+
+Inverse scale parameter.
+
+
+- - -
+
+#### `tf.contrib.distributions.Chi2.cdf(x, name='cdf')` {#Chi2.cdf}
+
+CDF of observations `x` under these Gamma distribution(s).
+
+##### Args:
+
+
+* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
+* `name`: The name to give this op.
+
+##### Returns:
+
+
+* `cdf`: tensor of dtype `dtype`, the CDFs of `x`.
+ + +- - - + +#### `tf.contrib.distributions.Chi2.df` {#Chi2.df} + + + + +- - - + +#### `tf.contrib.distributions.Chi2.dtype` {#Chi2.dtype} + +dtype of samples from this distribution. + + +- - - + +#### `tf.contrib.distributions.Chi2.entropy(name='entropy')` {#Chi2.entropy} + +The entropy of Gamma distribution(s). + +This is defined to be + +``` +entropy = alpha - log(beta) + log(Gamma(alpha)) + + (1-alpha)digamma(alpha) +``` + +where digamma(alpha) is the digamma function. + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. + + +- - - + +#### `tf.contrib.distributions.Chi2.event_shape(name='event_shape')` {#Chi2.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.Chi2.get_batch_shape()` {#Chi2.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Chi2.get_event_shape()` {#Chi2.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. + +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Chi2.is_reparameterized` {#Chi2.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.Chi2.log_cdf(x, name='log_cdf')` {#Chi2.log_cdf} + +Log CDF of observations `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. 
+ + +- - - + +#### `tf.contrib.distributions.Chi2.log_pdf(x, name='log_pdf')` {#Chi2.log_pdf} + +Log pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Chi2.mean` {#Chi2.mean} + +Mean of each batch member. + + +- - - + +#### `tf.contrib.distributions.Chi2.name` {#Chi2.name} + +Name to prepend to all ops. + + +- - - + +#### `tf.contrib.distributions.Chi2.pdf(x, name='pdf')` {#Chi2.pdf} + +Pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the PDFs of `x` + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Chi2.sample(n, seed=None, name=None)` {#Chi2.sample} + +Generate `n` samples. + +##### Args: + + +* `n`: scalar. Number of samples to draw from each distribution. +* `seed`: Python integer seed for RNG +* `name`: name to give to the op. + +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. + + +- - - + +#### `tf.contrib.distributions.Chi2.variance` {#Chi2.variance} + +Variance of each batch member. 
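The Chi2/Gamma relationship documented above (`Chi2(df) = Gamma(df/2, 1/2)`) can be checked numerically. This is an illustrative sketch using the PDF formula from the class docstring, not the TF implementation; it also confirms the documented mean (`df`) and variance (`2*df`) by quadrature:

```python
import math
import numpy as np

def chi2_pdf(x, df):
    # pdf(x) = x^(df/2 - 1) e^(-x/2) / (2^(df/2) Gamma(df/2)), x > 0
    return x ** (df / 2.0 - 1) * np.exp(-x / 2.0) / (
        2 ** (df / 2.0) * math.gamma(df / 2.0))

def gamma_pdf(x, alpha, beta):
    # pdf(x) = beta^alpha x^(alpha - 1) e^(-beta x) / Gamma(alpha)
    return (beta ** alpha * x ** (alpha - 1) * np.exp(-beta * x)
            / math.gamma(alpha))

df = 4.0
dx = 0.001
xs = np.arange(dx / 2, 80.0, dx)   # midpoint grid; tail beyond 80 is negligible

# Chi2(df) coincides pointwise with Gamma(alpha=df/2, beta=1/2).
assert np.allclose(chi2_pdf(xs, df), gamma_pdf(xs, df / 2.0, 0.5))

p = chi2_pdf(xs, df)
mean = np.sum(xs * p) * dx                  # approx. df
var = np.sum((xs - mean) ** 2 * p) * dx     # approx. 2 * df
```

This also explains why the `alpha` and `beta` properties above exist on `Chi2`: the class reuses the Gamma machinery with `alpha = df/2` and `beta = 1/2`.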
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md deleted file mode 100644 index 9c3d06393b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.contrib.layers.l2_regularizer(scale)` {#l2_regularizer} - -Returns a function that can be used to apply L2 regularization to weights. - -Small values of L2 can help prevent overfitting the training data. - -##### Args: - - -* `scale`: A scalar multiplier `Tensor`. 0.0 disables the regularizer. - -##### Returns: - - A function with signature `l2(weights, name=None)` that applies L2 - regularization. - -##### Raises: - - -* `ValueError`: If scale is outside of the range [0.0, 1.0] or if scale is not a - float. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.optimize_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.optimize_loss.md new file mode 100644 index 0000000000..db0b01186a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.optimize_loss.md @@ -0,0 +1,43 @@ +### `tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, moving_average_decay=0.9, learning_rate_decay_fn=None, variables=None, name=None)` {#optimize_loss} + +Given loss and parameters for optimizer, returns a training op. + +##### Args: + + +* `loss`: Tensor, 0 dimensional. +* `global_step`: Tensor, step counter for each update. +* `learning_rate`: float or Tensor, magnitude of update per each training step. +* `optimizer`: string, class or optimizer instance, used as trainer. + string should be name of optimizer, like 'SGD', + 'Adam', 'Adagrad'. 
Full list in OPTIMIZER_CLS_NAMES constant.
+    class should be sub-class of tf.Optimizer that implements
+    `compute_gradients` and `apply_gradients` functions.
+    optimizer instance should be an instantiation of a tf.Optimizer sub-class
+    and have `compute_gradients` and `apply_gradients` functions.
+* `gradient_noise_scale`: float or None, adds 0-mean normal noise scaled by this
+    value.
+* `gradient_multipliers`: dict of variables or variable names to floats.
+    If present, gradients for specified
+    variables will be multiplied by given constant.
+* `clip_gradients`: float or `None`, clips gradients by this value.
+* `moving_average_decay`: float or None, takes into account previous loss
+    to make learning smoother due to outliers.
+* `learning_rate_decay_fn`: function, takes `learning_rate` and `global_step`
+    `Tensor`s, returns `Tensor`.
+    Can be used to implement any learning rate decay
+    functions.
+    For example: tf.train.exponential_decay.
+* `variables`: list of variables to optimize or
+    `None` to use all trainable variables.
+* `name`: The name for this operation, used to scope operations and summaries.
+
+##### Returns:
+
+  Training op.
+
+##### Raises:
+
+
+* `ValueError`: if optimizer is wrong type.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.xavier_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.xavier_initializer.md
new file mode 100644
index 0000000000..55631e4b05
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.xavier_initializer.md
@@ -0,0 +1,29 @@
+### `tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer}
+
+Returns an initializer performing "Xavier" initialization for weights.
+
+This function implements the weight initialization from:
+
+Xavier Glorot and Yoshua Bengio (2010):
+         Understanding the difficulty of training deep feedforward neural
+         networks.
International conference on artificial intelligence and + statistics. + +This initializer is designed to keep the scale of the gradients roughly the +same in all layers. In uniform distribution this ends up being the range: +`x = sqrt(6. / (in + out)); [-x, x]` and for normal distribution a standard +deviation of `sqrt(3. / (in + out))` is used. + +##### Args: + + +* `uniform`: Whether to use uniform or normal distributed random initialization. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer for a weight matrix. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.BaseEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.BaseEstimator.md deleted file mode 100644 index 034af231a1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.BaseEstimator.md +++ /dev/null @@ -1,189 +0,0 @@ -Abstract BaseEstimator class to train and evaluate TensorFlow models. - -Concrete implementation of this class should provide following functions: - * _get_train_ops - * _get_eval_ops - * _get_predict_ops -It may override _get_default_metric_functions. - -`Estimator` implemented below is a good example of how to use this class. - -Parameters: - model_dir: Directory to save model parameters, graph and etc. -- - - - -#### `tf.contrib.learn.BaseEstimator.__init__(model_dir=None, config=None)` {#BaseEstimator.__init__} - - - - -- - - - -#### `tf.contrib.learn.BaseEstimator.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=32, steps=None, metrics=None, name=None)` {#BaseEstimator.evaluate} - -Evaluates given model with provided evaluation data. - -##### Args: - - -* `x`: features. -* `y`: targets. -* `input_fn`: Input function. 
If set, x and y must be None. -* `feed_fn`: Function creating a feed dict every time it is called. Called - once per iteration. -* `batch_size`: minibatch size to use on the input, defaults to 32. Ignored - if input_fn is set. -* `steps`: Number of steps to evalute for. -* `metrics`: Dict of metric ops to run. If None, the default metric functions - are used; if {}, no metrics are used. -* `name`: Name of the evaluation if user needs to run multiple evaluation on - different data sets, such as evaluate on training data vs test data. - -##### Returns: - - Returns self. - -##### Raises: - - -* `ValueError`: If x or y are not None while input_fn or feed_fn is not None. - - -- - - - -#### `tf.contrib.learn.BaseEstimator.fit(x, y, steps, batch_size=32, monitors=None)` {#BaseEstimator.fit} - -Trains a model given training data X and y. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). -* `steps`: number of steps to train model for. -* `batch_size`: minibatch size to use on the input, defaults to 32. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.BaseEstimator.get_params(deep=True)` {#BaseEstimator.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. 
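The `get_params`/`set_params` pair above follows the scikit-learn estimator convention. A minimal sketch of how the two methods interact (`ToyEstimator` is a hypothetical class for illustration, not the TF implementation):

```python
class ToyEstimator:
    """Hypothetical estimator exposing scikit-learn style parameter access."""

    def __init__(self, learning_rate=0.1, batch_size=32):
        self.learning_rate = learning_rate
        self.batch_size = batch_size

    def get_params(self, deep=True):
        # Parameter names mapped to their values, as described above.
        return {"learning_rate": self.learning_rate,
                "batch_size": self.batch_size}

    def set_params(self, **params):
        for name, value in params.items():
            if name not in self.get_params():
                raise ValueError("unknown parameter: %s" % name)
            setattr(self, name, value)
        return self  # returns self, matching the documented contract

est = ToyEstimator().set_params(learning_rate=0.5)
print(est.get_params()["learning_rate"])  # 0.5
```

Because `set_params` returns `self`, calls can be chained in the same way `fit` and `partial_fit` return the estimator.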
- - -- - - - -#### `tf.contrib.learn.BaseEstimator.model_dir` {#BaseEstimator.model_dir} - - - - -- - - - -#### `tf.contrib.learn.BaseEstimator.partial_fit(x, y, steps=1, batch_size=32, monitors=None)` {#BaseEstimator.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). -* `steps`: number of steps to train model for. -* `batch_size`: minibatch size to use on the input, defaults to 32. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.BaseEstimator.predict(x=None, input_fn=None, batch_size=None)` {#BaseEstimator.predict} - -Returns predictions for given features. - -##### Args: - - -* `x`: features. -* `input_fn`: Input function. If set, x must be None. -* `batch_size`: Override default batch size. - -##### Returns: - - Numpy array of predicted classes or regression values. - - -- - - - -#### `tf.contrib.learn.BaseEstimator.set_params(**params)` {#BaseEstimator.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). 
The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.BaseEstimator.train(input_fn, steps, monitors=None)` {#BaseEstimator.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowClassifier.md deleted file mode 100644 index 63588166ec..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowClassifier.md +++ /dev/null @@ -1,279 +0,0 @@ -TensorFlow Linear Classifier model. -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.__init__(n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowClassifier.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.bias_` {#TensorFlowClassifier.bias_} - -Returns weights of the linear classifier. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowClassifier.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowClassifier.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. 
-This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.get_params(deep=True)` {#TensorFlowClassifier.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.get_tensor(name)` {#TensorFlowClassifier.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.get_tensor_value(name)` {#TensorFlowClassifier.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.get_variable_names()` {#TensorFlowClassifier.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.model_dir` {#TensorFlowClassifier.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.partial_fit(x, y)` {#TensorFlowClassifier.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowClassifier.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.predict_proba(x, batch_size=None)` {#TensorFlowClassifier.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.restore(cls, path, config=None)` {#TensorFlowClassifier.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.save(path)` {#TensorFlowClassifier.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.set_params(**params)` {#TensorFlowClassifier.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowClassifier.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. 
Used for callbacks
-    inside the training loop.
-
-##### Returns:
-
-    Returns self.
-
-
-- - -
-
-#### `tf.contrib.learn.TensorFlowClassifier.weights_` {#TensorFlowClassifier.weights_}
-
-Returns weights of the linear classifier.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowRNNClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowRNNClassifier.md
new file mode 100644
index 0000000000..130b2706de
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.TensorFlowRNNClassifier.md
@@ -0,0 +1,312 @@
+TensorFlow RNN Classifier model.
+
+Parameters:
+    rnn_size: The size for rnn cell, e.g. size of your word embeddings.
+    cell_type: The type of rnn cell, including rnn, gru, and lstm.
+    num_layers: The number of layers of the rnn model.
+    input_op_fn: Function that will transform the input tensor, such as
+        creating word embeddings, byte list, etc. This takes
+        an argument X for input and returns transformed X.
+    bidirectional: boolean, whether this is a bidirectional rnn.
+    sequence_length: If sequence_length is provided, dynamic calculation is
+        performed. This saves computational time when unrolling past max sequence
+        length.
+    initial_state: An initial state for the RNN. This must be a tensor of
+        appropriate type and shape [batch_size x cell.state_size].
+    n_classes: Number of classes in the target.
+    batch_size: Mini batch size.
+    steps: Number of steps to run over data.
+    optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad".
+    learning_rate: If this is a constant float value, no decay function is
+        used. Alternatively, a customized decay function can be passed that
+        accepts global_step as a parameter and returns a Tensor,
+        e.g. 
exponential decay function:
+        def exp_decay(global_step):
+            return tf.train.exponential_decay(
+                0.1, global_step,
+                decay_steps=2, decay_rate=0.001)
+    class_weight: None or list of n_classes floats. Weight associated with
+        classes for loss computation. If not given, all classes are
+        supposed to have weight one.
+    continue_training: when continue_training is True, once initialized
+        model will be continually trained on every call of fit.
+    config: RunConfig object that controls the configurations of the session,
+        e.g. num_cores, gpu_memory_fraction, etc.
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.__init__(rnn_size, n_classes, cell_type='gru', num_layers=1, input_op_fn=null_input_op_fn, initial_state=None, bidirectional=False, sequence_length=None, batch_size=32, steps=50, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRNNClassifier.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.bias_` {#TensorFlowRNNClassifier.bias_}
+
+Returns bias of the rnn layer.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRNNClassifier.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRNNClassifier.fit}
+
+Builds a neural network model given provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes variables;
+subsequent calls continue training the same model.
+This logic follows the partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+* `steps`: int, number of steps to train.
+    If None or 0, train for `self.steps`.
+* `monitors`: List of `BaseMonitor` objects to print training progress and
+    invoke early stopping.
+* `logdir`: the directory to save the log file that can be used for
+    optional visualization.
+
+##### Returns:
+
+    Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.get_params(deep=True)` {#TensorFlowRNNClassifier.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* `deep`: boolean, optional
+    If True, will return the parameters for this estimator and
+    contained subobjects that are estimators.
+
+##### Returns:
+
+    params : mapping of string to any
+    Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.get_tensor(name)` {#TensorFlowRNNClassifier.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+    Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.get_tensor_value(name)` {#TensorFlowRNNClassifier.get_tensor_value}
+
+Returns value of the tensor given by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+    Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.get_variable_names()` {#TensorFlowRNNClassifier.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+    List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.model_dir` {#TensorFlowRNNClassifier.model_dir}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.partial_fit(x, y)` {#TensorFlowRNNClassifier.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset, implementing either
+iterative training or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at once, or when the model takes a long time to
+converge and you want to split training into subparts.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+##### Returns:
+
+    Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowRNNClassifier.predict}
+
+Predict classes or regression values for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `axis`: Which axis to argmax for classification.
+    By default axis 1 (next after batch) is used.
+    Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member
+    variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples]. The predicted classes or predicted
+    value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.predict_proba(x, batch_size=None)` {#TensorFlowRNNClassifier.predict_proba}
+
+Predict class probabilities of the input samples X.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. 
By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples, n_classes]. The predicted
+    probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.restore(cls, path, config=None)` {#TensorFlowRNNClassifier.restore}
+
+Restores model from given path.
+
+##### Args:
+
+
+* `path`: Path to the checkpoints and other model information.
+* `config`: RunConfig object that controls the configurations of the session,
+    e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be
+    reconfigured.
+
+##### Returns:
+
+    Estimator, object of the subclass of TensorFlowEstimator.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.save(path)` {#TensorFlowRNNClassifier.save}
+
+Saves checkpoints and graph to given path.
+
+##### Args:
+
+
+* `path`: Folder to save model to.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.set_params(**params)` {#TensorFlowRNNClassifier.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
+
+##### Returns:
+
+    self
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowRNNClassifier.train}
+
+Trains a model given input builder function.
+
+##### Args:
+
+
+* `input_fn`: Input builder function, returns tuple of dicts or
+    dict and Tensor.
+* `steps`: number of steps to train model for.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+    Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNClassifier.weights_` {#TensorFlowRNNClassifier.weights_}
+
+Returns weights of the rnn layer.
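The custom `learning_rate` decay callable described in the class parameters above can be sketched without TensorFlow. The following is an illustrative, framework-free approximation of the continuous (`staircase=False`) formula that `tf.train.exponential_decay` documents; the function name and the constants are taken from the docstring's example and are not library code:

```python
def exp_decay_value(learning_rate, global_step, decay_steps, decay_rate):
    # Continuous exponential decay:
    #   decayed = learning_rate * decay_rate ** (global_step / decay_steps)
    return learning_rate * decay_rate ** (global_step / decay_steps)

# With the constants from the docstring example (0.1, decay_steps=2,
# decay_rate=0.001): the rate starts at 0.1 and is multiplied by 0.001
# every 2 global steps.
lr_at_0 = exp_decay_value(0.1, 0, 2, 0.001)
lr_at_2 = exp_decay_value(0.1, 2, 2, 0.001)
```

Passing a callable like this instead of a constant float lets the estimator recompute the rate from `global_step` at every training step.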
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.evaluate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.evaluate.md new file mode 100644 index 0000000000..022662c3f6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.evaluate.md @@ -0,0 +1,44 @@ +### `tf.contrib.learn.evaluate(graph, output_dir, checkpoint_path, eval_dict, update_op=None, global_step_tensor=None, supervisor_master='', log_every_steps=10, feed_fn=None, max_steps=None)` {#evaluate} + +Evaluate a model loaded from a checkpoint. + +Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint +to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval +loop for `max_steps` steps. + +In each step of evaluation, all tensors in the `eval_dict` are evaluated, and +every `log_every_steps` steps, they are logged. At the very end of evaluation, +a summary is evaluated (finding the summary ops using `Supervisor`'s logic) +and written to `output_dir`. + +##### Args: + + +* `graph`: A `Graph` to train. It is expected that this graph is not in use + elsewhere. +* `output_dir`: A string containing the directory to write a summary to. +* `checkpoint_path`: A string containing the path to a checkpoint to restore. + Can be `None` if the graph doesn't require loading any variables. +* `eval_dict`: A `dict` mapping string names to tensors to evaluate. It is + evaluated in every logging step. The result of the final evaluation is + returned. If update_op is None, then it's evaluated in every step. +* `update_op`: A `Tensor` which is run in every step. +* `global_step_tensor`: A `Variable` containing the global step. If `None`, + one is extracted from the graph using the same logic as in `Supervisor`. + Used to place eval summaries on training curves. +* `supervisor_master`: The master string to use when preparing the session. 
+* `log_every_steps`: Integer. Output logs every `log_every_steps` evaluation + steps. The logs contain the `eval_dict` and timing information. +* `feed_fn`: A function that is called every iteration to produce a `feed_dict` + passed to `session.run` calls. Optional. +* `max_steps`: Integer. Evaluate `eval_dict` this many times. + +##### Returns: + + A tuple `(eval_results, global_step)`: + +* `eval_results`: A `dict` mapping `string` to numeric values (`int`, `float`) + that are the result of running eval_dict in the last step. `None` if no + eval steps were run. +* `global_step`: The global step this evaluation corresponds to. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_dask_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_dask_data.md new file mode 100644 index 0000000000..a14a51ff56 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_dask_data.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data} + +Extract data from dask.Series or dask.DataFrame for predictors + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_data.md new file mode 100644 index 0000000000..82703a8097 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_data.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.extract_pandas_data(data)` {#extract_pandas_data} + +Extract data from pandas.DataFrame for predictors + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_examples.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_examples.md deleted file mode 100644 index c5cec0542a..0000000000 --- 
a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_examples.md +++ /dev/null @@ -1,39 +0,0 @@ -### `tf.contrib.learn.read_batch_examples(file_pattern, batch_size, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, num_threads=1, name=None)` {#read_batch_examples} - -Adds operations to read, queue, batch `Example` protos. - -Given file pattern (or list of files), will setup a queue for file names, -read `Example` proto using provided `reader`, use batch queue to create -batches of examples of size `batch_size`. - -All queue runners are added to the queue runners collection, and may be -started via `start_queue_runners`. - -All ops are added to the default graph. - -##### Args: - - -* `file_pattern`: List of files or pattern of file paths containing - `Example` records. See `tf.gfile.Glob` for pattern rules. -* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. -* `reader`: A function or class that returns an object with - `read` method, (filename tensor) -> (example tensor). -* `randomize_input`: Whether the input should be randomized. -* `num_epochs`: Integer specifying the number of times to read through the - dataset. If `None`, cycles through the dataset forever. - NOTE - If specified, creates a variable that must be initialized, so call - `tf.initialize_all_variables()` as shown in the tests. -* `queue_capacity`: Capacity for input queue. -* `num_threads`: The number of threads enqueuing examples. -* `name`: Name of resulting op. - -##### Returns: - - String `Tensor` of batched `Example` proto. - -##### Raises: - - -* `ValueError`: for invalid inputs. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.run_feeds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.run_feeds.md deleted file mode 100644 index f5c3e977d0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.run_feeds.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.contrib.learn.run_feeds(output_dict, feed_dicts, restore_checkpoint_path=None)` {#run_feeds} - -Run `output_dict` tensors with each input in `feed_dicts`. - -If `checkpoint_path` is supplied, restore from checkpoint. Otherwise, init all -variables. - -##### Args: - - -* `output_dict`: A `dict` mapping string names to `Tensor` objects to run. - Tensors must all be from the same graph. -* `feed_dicts`: Iterable of `dict` objects of input values to feed. -* `restore_checkpoint_path`: A string containing the path to a checkpoint to - restore. - -##### Returns: - - A list of dicts of values read from `output_dict` tensors, one item in the - list for each item in `feed_dicts`. Keys are the same as `output_dict`, - values are the results read from the corresponding `Tensor` in - `output_dict`. - -##### Raises: - - -* `ValueError`: if `output_dict` or `feed_dicts` is None or empty. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.train.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.train.md
new file mode 100644
index 0000000000..65057636ce
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.train.md
@@ -0,0 +1,62 @@
+### `tf.contrib.learn.train(graph, output_dir, train_op, loss_op, global_step_tensor=None, init_op=None, init_feed_dict=None, init_fn=None, log_every_steps=10, supervisor_is_chief=True, supervisor_master='', supervisor_save_model_secs=600, supervisor_save_summaries_steps=100, feed_fn=None, max_steps=None, fail_on_nan_loss=True, monitors=None)` {#train}
+
+Train a model.
+
+Given `graph`, a directory to write outputs to (`output_dir`), and some ops,
+run a training loop. The given `train_op` performs one step of training on the
+model and is expected to increment the `global_step_tensor`, a scalar integer
+tensor counting training steps. The `loss_op` represents the objective function
+of the training. This function uses `Supervisor` to initialize the
+graph (from a checkpoint if one is available in `output_dir`), write summaries
+defined in the graph, and write regular checkpoints as defined by
+`supervisor_save_model_secs`.
+
+Training continues until `global_step_tensor` evaluates to `max_steps`, or, if
+`fail_on_nan_loss`, until `loss_op` evaluates to `NaN`; in that case the
+program is terminated with exit code 1.
+
+##### Args:
+
+
+* `graph`: A graph to train. It is expected that this graph is not in use
+    elsewhere.
+* `output_dir`: A directory to write outputs to.
+* `train_op`: An op that performs one training step when run.
+* `loss_op`: A scalar loss tensor.
+* `global_step_tensor`: A tensor representing the global step. If none is given,
+    one is extracted from the graph using the same logic as in `Supervisor`.
+* `init_op`: An op that initializes the graph. 
If `None`, use `Supervisor`'s
+    default.
+* `init_feed_dict`: A dictionary that maps `Tensor` objects to feed values.
+    This feed dictionary will be used when `init_op` is evaluated.
+* `init_fn`: Optional callable passed to Supervisor to initialize the model.
+* `log_every_steps`: Output logs every `log_every_steps` steps. The logs contain
+    timing data and the current loss.
+* `supervisor_is_chief`: Whether the current process is the chief supervisor in
+    charge of restoring the model and running standard services.
+* `supervisor_master`: The master string to use when preparing the session.
+* `supervisor_save_model_secs`: Save a checkpoint every
+    `supervisor_save_model_secs` seconds when training.
+* `supervisor_save_summaries_steps`: Save summaries every
+    `supervisor_save_summaries_steps` steps when training.
+* `feed_fn`: A function that is called every iteration to produce a `feed_dict`
+    passed to `session.run` calls. Optional.
+* `max_steps`: Train until `global_step_tensor` evaluates to this value.
+* `fail_on_nan_loss`: If true, raise `NanLossDuringTrainingError` if `loss_op`
+    evaluates to `NaN`. If false, continue training as if nothing happened.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+    The final loss value.
+
+##### Raises:
+
+
+* `ValueError`: If `global_step_tensor` is not provided. See
+    `tf.contrib.framework.get_global_step` for how we look it up if not
+    provided explicitly.
+* `NanLossDuringTrainingError`: If `fail_on_nan_loss` is `True`, and loss ever
+    evaluates to `NaN`.
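The control flow described above (run `train_op` until `global_step_tensor` reaches `max_steps`, log every `log_every_steps` steps, and abort on a `NaN` loss) can be sketched framework-free. Everything here is illustrative: `step_fn` stands in for one `session.run` of `(train_op, loss_op, global_step_tensor)`, and the exception class merely mirrors the name in the Raises section:

```python
import math

class NanLossDuringTrainingError(RuntimeError):
    pass

def train_loop(step_fn, max_steps, fail_on_nan_loss=True, log_every_steps=10):
    # Minimal control-flow sketch of the loop documented above.
    loss, step = None, 0
    while step < max_steps:
        loss, step = step_fn()
        if step % log_every_steps == 0:
            print("step %d, loss %g" % (step, loss))
        if math.isnan(loss):
            if fail_on_nan_loss:
                raise NanLossDuringTrainingError("loss is NaN at step %d" % step)
            # Otherwise, continue training as if nothing happened.
    return loss  # the final loss value, as the Returns section states

# Toy usage: a "model" whose loss halves on every step.
state = {"loss": 8.0, "step": 0}
def fake_step():
    state["loss"] /= 2.0
    state["step"] += 1
    return state["loss"], state["step"]

final_loss = train_loop(fake_step, max_steps=3)
```

The real function additionally delegates initialization, checkpointing, and summary writing to `Supervisor`, which this sketch omits.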
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_accuracy.md
new file mode 100644
index 0000000000..684d1849d3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_accuracy.md
@@ -0,0 +1,51 @@
+### `tf.contrib.metrics.streaming_accuracy(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_accuracy}
+
+Calculates how often `predictions` matches `labels`.
+
+The `streaming_accuracy` function creates two local variables, `total` and
+`count`, that are used to compute the frequency with which `predictions`
+matches `labels`. This frequency is ultimately returned as `accuracy`: an
+idempotent operation that simply divides `total` by `count`.
+To facilitate the estimation of the accuracy over a stream of data, the
+function utilizes two operations. First, an `is_correct` operation
+computes a tensor whose shape matches `predictions` and whose elements are
+set to 1.0 when the corresponding values of `predictions` and `labels` match
+and 0.0 otherwise. Second, an `update_op` operation updates these variables;
+its behavior depends on the value of `weights`. If `weights` is None, then
+`update_op` increments `total` with the number of elements of `predictions`
+that match `labels` and increments `count` with the number of elements in
+`predictions`. If `weights` is not `None`, then `update_op` increments
+`total` with the reduced sum of the product of `weights` and `is_correct` and
+increments `count` with the reduced sum of `weights`. In addition to
+performing the updates, `update_op` also returns the `accuracy` value.
+
+##### Args:
+
+
+* `predictions`: The predicted values, a `Tensor` of any shape.
+* `labels`: The ground truth values, a `Tensor` whose shape matches
+    `predictions`.
+* `weights`: An optional set of weights whose shape matches `predictions` + which, when not `None`, produces a weighted mean accuracy. +* `metrics_collections`: An optional list of collections that `accuracy` should + be added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `accuracy`: A tensor representing the accuracy, the value of `total` divided + by `count`. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately and whose value matches `accuracy`. + +##### Raises: + + +* `ValueError`: If the dimensions of `predictions` and `labels` don't match or + if `weight` is not `None` and its shape doesn't match `predictions` or + if either `metrics_collections` or `updates_collections` are not + a list or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_absolute_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_absolute_error.md new file mode 100644 index 0000000000..b4ecd6e916 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_absolute_error.md @@ -0,0 +1,48 @@ +### `tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_absolute_error} + +Computes the mean absolute error between the labels and predictions. + +The `streaming_mean_absolute_error` function creates two local variables, +`total` and `count` that are used to compute the mean absolute error. This +average is ultimately returned as `mean_absolute_error`: an idempotent +operation that simply divides `total` by `count`. To facilitate the estimation +of the mean absolute error over a stream of data, the function utilizes two +operations. 
First, an `absolute_errors` operation computes the absolute value
+of the differences between `predictions` and `labels`. Second, an `update_op`
+operation updates these variables; its behavior depends on `weights`. If `weights`
+is None, then `update_op` increments `total` with the reduced sum of
+`absolute_errors` and increments `count` with the number of elements in
+`absolute_errors`. If `weights` is not `None`, then `update_op` increments
+`total` with the reduced sum of the product of `weights` and `absolute_errors`
+and increments `count` with the reduced sum of `weights`. In addition to
+performing the updates, `update_op` also returns the `mean_absolute_error`
+value.
+
+##### Args:
+
+
+* `predictions`: A `Tensor` of arbitrary shape.
+* `labels`: A `Tensor` of the same shape as `predictions`.
+* `weights`: An optional set of weights of the same shape as `predictions`. If
+    `weights` is not None, the function computes a weighted mean.
+* `metrics_collections`: An optional list of collections that
+    `mean_absolute_error` should be added to.
+* `updates_collections`: An optional list of collections that `update_op` should
+    be added to.
+* `name`: An optional variable_op_scope name.
+
+##### Returns:
+
+
+* `mean_absolute_error`: A tensor representing the current mean, the value of
+    `total` divided by `count`.
+* `update_op`: An operation that increments the `total` and `count` variables
+    appropriately and whose value matches `mean_absolute_error`.
+
+##### Raises:
+
+
+* `ValueError`: If `weights` is not `None` and its shape doesn't match
+    `predictions` or if either `metrics_collections` or `updates_collections`
+    are not a list or tuple.
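Both streaming metrics above share the same `total`/`count` accumulator pattern. The following framework-free sketch (all names hypothetical, not library code) shows the update semantics the two docstrings describe: the update step folds a batch into the accumulators and returns the current value, while the metric itself is the idempotent ratio `total / count`:

```python
class StreamingMean:
    """Sketch of the total/count pattern behind the streaming metrics:
    update() plays the role of update_op, result() the idempotent metric."""

    def __init__(self):
        self.total = 0.0
        self.count = 0.0

    def update(self, values, weights=None):
        # With weights: total += sum(w * v), count += sum(w).
        # Without weights every element gets weight 1, so count grows by
        # the number of elements, matching the unweighted docstring case.
        if weights is None:
            weights = [1.0] * len(values)
        self.total += sum(w * v for w, v in zip(weights, values))
        self.count += sum(weights)
        return self.result()  # update_op also returns the metric value

    def result(self):
        return self.total / self.count if self.count else 0.0

# streaming_accuracy: values are the 1.0/0.0 is_correct indicator.
acc = StreamingMean()
acc.update([1.0, 0.0, 1.0, 1.0])  # 3 of 4 correct
# streaming_mean_absolute_error: values are |predictions - labels|.
mae = StreamingMean()
mae.update([abs(p - l) for p, l in zip([0.5, 2.0], [1.0, 1.0])])
```

`streaming_accuracy` feeds the `is_correct` indicator into this pattern; `streaming_mean_absolute_error` feeds `absolute_errors`. Calling `result()` repeatedly without new updates returns the same value, which is what makes the metric tensor idempotent.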
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_precision_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_precision_at_k.md deleted file mode 100644 index ad24dd742a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_precision_at_k.md +++ /dev/null @@ -1,60 +0,0 @@ -### `tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, k, class_id=None, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_k} - -Computes precision@k of the predictions with respect to sparse labels. - -If `class_id` is specified, we calculate precision by considering only the - entries in the batch for which `class_id` is in the top-k highest - `predictions`, and computing the fraction of them for which `class_id` is - indeed a correct label. -If `class_id` is not specified, we'll calculate precision as how often on - average a class among the top-k classes with the highest predicted values - of a batch entry is correct and can be found in the label for that entry. - -`streaming_sparse_precision_at_k` creates two local variables, -`true_positive_at_` and `false_positive_at_`, that are used to compute -the precision@k frequency. This frequency is ultimately returned as -`recall_at_`: an idempotent operation that simply divides -`true_positive_at_` by total (`true_positive_at_` + `recall_at_`). To -facilitate the estimation of precision@k over a stream of data, the function -utilizes three steps. -* A `top_k` operation computes a tensor whose elements indicate the top `k` - predictions of the `predictions` `Tensor`. -* Set operations are applied to `top_k` and `labels` to calculate true - positives and false positives. -* An `update_op` operation increments `true_positive_at_` and - `false_positive_at_`. 
It also returns the recall value. - -##### Args: - - -* `predictions`: Float `Tensor` with shape [D1, ... DN, num_classes] where - N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes]. - The final dimension contains the logit values for each class. [D1, ... DN] - must match `labels`. -* `labels`: `int64` `Tensor` or `SparseTensor` with shape - [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of - target classes for the associated prediction. Commonly, N=1 and `labels` - has shape [batch_size, num_labels]. [D1, ... DN] must match - `predictions_idx`. Values should be in range [0, num_classes], where - num_classes is the last dimension of `predictions`. -* `k`: Integer, k for @k metric. -* `class_id`: Integer class ID for which we want binary metrics. This should be - in range [0, num_classes], where num_classes is the last dimension of - `predictions`. -* `ignore_mask`: An optional, binary tensor whose shape is broadcastable to the - the first [D1, ... DN] dimensions of `predictions_idx` and `labels`. -* `metrics_collections`: An optional list of collections that values should - be added to. -* `updates_collections`: An optional list of collections that updates should - be added to. -* `name`: Name of new update operation, and namespace for other dependant ops. - -##### Returns: - - -* `precision`: Scalar `float64` `Tensor` with the value of `true_positives` - divided by the sum of `true_positives` and `false_positives`. -* `update_op`: `Operation` that increments `true_positives` and - `false_positives` variables appropriately, and whose value matches - `precision`. 
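For the `class_id=None` case, the computation described above reduces to counting, per batch entry, how many of the top-k predicted classes actually appear among the labels. A minimal plain-Python sketch (an illustrative helper, not the TensorFlow op):

```python
def precision_at_k(predictions, labels, k):
    """Fraction of top-k predicted classes that appear among the true labels."""
    true_positives = false_positives = 0
    for scores, true_classes in zip(predictions, labels):
        # Indices of the k highest scores, i.e. the top-k predicted classes.
        top_k = sorted(range(len(scores)), key=lambda c: -scores[c])[:k]
        for c in top_k:
            if c in true_classes:
                true_positives += 1
            else:
                false_positives += 1
    return true_positives / (true_positives + false_positives)
```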
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_recall_at_k.md deleted file mode 100644 index d09b288089..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_sparse_recall_at_k.md +++ /dev/null @@ -1,59 +0,0 @@ -### `tf.contrib.metrics.streaming_sparse_recall_at_k(predictions, labels, k, class_id=None, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_recall_at_k} - -Computes recall@k of the predictions with respect to sparse labels. - -If `class_id` is specified, we calculate recall by considering only the - entries in the batch for which `class_id` is in the label, and computing - the fraction of them for which `class_id` is in the top-k `predictions`. -If `class_id` is not specified, we'll calculate recall as how often on - average a class among the labels of a batch entry is in the top-k - `predictions`. - -`streaming_sparse_recall_at_k` creates two local variables, -`true_positive_at_` and `false_negative_at_`, that are used to compute -the recall_at_k frequency. This frequency is ultimately returned as -`recall_at_`: an idempotent operation that simply divides -`true_positive_at_` by total (`true_positive_at_` + `recall_at_`). To -facilitate the estimation of recall@k over a stream of data, the function -utilizes three steps. -* A `top_k` operation computes a tensor whose elements indicate the top `k` - predictions of the `predictions` `Tensor`. -* Set operations are applied to `top_k` and `labels` to calculate true - positives and false negatives. -* An `update_op` operation increments `true_positive_at_` and - `false_negative_at_`. It also returns the recall value. - -##### Args: - - -* `predictions`: Float `Tensor` with shape [D1, ... DN, num_classes] where - N >= 1. 
Commonly, N=1 and predictions has shape [batch size, num_classes]. - The final dimension contains the logit values for each class. [D1, ... DN] - must match `labels`. -* `labels`: `int64` `Tensor` or `SparseTensor` with shape - [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of - target classes for the associated prediction. Commonly, N=1 and `labels` - has shape [batch_size, num_labels]. [D1, ... DN] must match `labels`. - Values should be in range [0, num_classes], where num_classes is the last - dimension of `predictions`. -* `k`: Integer, k for @k metric. -* `class_id`: Integer class ID for which we want binary metrics. This should be - in range [0, num_classes], where num_classes is the last dimension of - `predictions`. -* `ignore_mask`: An optional, binary tensor whose shape is broadcastable to the - the first [D1, ... DN] dimensions of `predictions_idx` and `labels`. -* `metrics_collections`: An optional list of collections that values should - be added to. -* `updates_collections`: An optional list of collections that updates should - be added to. -* `name`: Name of new update operation, and namespace for other dependant ops. - -##### Returns: - - -* `recall`: Scalar `float64` `Tensor` with the value of `true_positives` divided - by the sum of `true_positives` and `false_negatives`. -* `update_op`: `Operation` that increments `true_positives` and - `false_negatives` variables appropriately, and whose value matches - `recall`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.decode_json_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.decode_json_example.md deleted file mode 100644 index bf5184c40a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.decode_json_example.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.decode_json_example(json_examples, name=None)` {#decode_json_example} - -Convert JSON-encoded Example records to binary protocol buffer strings. - -This op translates a tensor containing Example records, encoded using -the [standard JSON -mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), -into a tensor containing the same records encoded as binary protocol -buffers. The resulting tensor can then be fed to any of the other -Example-parsing ops. - -##### Args: - - -* `json_examples`: A `Tensor` of type `string`. - Each string is a JSON object serialized according to the JSON - mapping of the Example proto. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. - Each string is a binary Example protocol buffer corresponding - to the respective element of `json_examples`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md deleted file mode 100644 index 2f52941c5f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.delete_session_tensor(name=None)` {#delete_session_tensor} - -Delete the tensor by feeding a tensor handle. - -This is EXPERIMENTAL and subject to change. - -Delete the tensor of a given tensor handle. The tensor is produced -in a previous run() and stored in the state of the session. - -##### Args: - - -* `name`: Optional name prefix for the return tensor. 
- -##### Returns: - - A pair of graph elements. The first is a placeholder for feeding a - tensor handle and the second is a deletion operation. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.div.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.div.md new file mode 100644 index 0000000000..92eba7927a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.div.md @@ -0,0 +1,15 @@ +### `tf.div(x, y, name=None)` {#div} + +Returns x / y element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.dynamic_stitch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.dynamic_stitch.md deleted file mode 100644 index 6bb1f8dd10..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.dynamic_stitch.md +++ /dev/null @@ -1,53 +0,0 @@ -### `tf.dynamic_stitch(indices, data, name=None)` {#dynamic_stitch} - -Interleave the values from the `data` tensors into a single tensor. - -Builds a merged tensor such that - - merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...] - -For example, if each `indices[m]` is scalar or vector, we have - - # Scalar indices - merged[indices[m], ...] = data[m][...] - - # Vector indices - merged[indices[m][i], ...] = data[m][i, ...] - -Each `data[i].shape` must start with the corresponding `indices[i].shape`, -and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we -must have `data[i].shape = indices[i].shape + constant`. 
In terms of this -`constant`, the output shape is - - merged.shape = [max(indices)] + constant - -Values are merged in order, so if an index appears in both `indices[m][i]` and -`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the -merged result. - -For example: - - indices[0] = 6 - indices[1] = [4, 1] - indices[2] = [[5, 2], [0, 3]] - data[0] = [61, 62] - data[1] = [[41, 42], [11, 12]] - data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] - merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], - [51, 52], [61, 62]] - -
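The merge rule and the worked example above can be reproduced with a small plain-Python sketch (nested lists stand in for tensors; this re-implementation is illustrative, not the op's kernel):

```python
def dynamic_stitch(indices, data):
    """Interleave value slices from `data` at the positions given by `indices`."""
    merged = {}

    def scatter(idx, values):
        # Recurse through nested index lists; a non-list index selects a slice.
        if isinstance(idx, list):
            for i, v in zip(idx, values):
                scatter(i, v)
        else:
            merged[idx] = values

    for idx, values in zip(indices, data):
        scatter(idx, values)
    return [merged[i] for i in range(max(merged) + 1)]

indices = [6, [4, 1], [[5, 2], [0, 3]]]
data = [[61, 62], [[41, 42], [11, 12]],
        [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]]
# dynamic_stitch(indices, data) matches the `merged` result shown above.
```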
- -
- -##### Args: - - -* `indices`: A list of at least 2 `Tensor` objects of type `int32`. -* `data`: A list with the same number of `Tensor` objects as `indices` of `Tensor` objects of the same type. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.erfc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.erfc.md new file mode 100644 index 0000000000..d2ac7952e0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.erfc.md @@ -0,0 +1,14 @@ +### `tf.erfc(x, name=None)` {#erfc} + +Computes the complementary error function of `x` element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.AlreadyExistsError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.AlreadyExistsError.md new file mode 100644 index 0000000000..85425df298 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.AlreadyExistsError.md @@ -0,0 +1,14 @@ +Raised when an entity that we attempted to create already exists. + +For example, running an operation that saves a file +(e.g. [`tf.train.Saver.save()`](../../api_docs/python/train.md#Saver.save)) +could potentially raise this exception if an explicit filename for an +existing file was passed. + +- - - + +#### `tf.errors.AlreadyExistsError.__init__(node_def, op, message)` {#AlreadyExistsError.__init__} + +Creates an `AlreadyExistsError`. 
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.InvalidArgumentError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.InvalidArgumentError.md
new file mode 100644
index 0000000000..877325fe0b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.InvalidArgumentError.md
@@ -0,0 +1,17 @@
+Raised when an operation receives an invalid argument.
+
+This may occur, for example, if an operation receives an input
+tensor that has an invalid value or shape. For instance, the
+[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul) op will raise this
+error if it receives an input that is not a matrix, and the
+[`tf.reshape()`](../../api_docs/python/array_ops.md#reshape) op will raise
+this error if the new shape does not match the number of elements in the input
+tensor.
+
+- - -
+
+#### `tf.errors.InvalidArgumentError.__init__(node_def, op, message)` {#InvalidArgumentError.__init__}
+
+Creates an `InvalidArgumentError`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.UnavailableError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.UnavailableError.md
new file mode 100644
index 0000000000..e212ae94ec
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.UnavailableError.md
@@ -0,0 +1,11 @@
+Raised when the runtime is currently unavailable.
+
+This exception is not currently used.
+
+- - -
+
+#### `tf.errors.UnavailableError.__init__(node_def, op, message)` {#UnavailableError.__init__}
+
+Creates an `UnavailableError`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.greater.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.greater.md
deleted file mode 100644
index c629a0286f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.greater.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.greater(x, y, name=None)` {#greater}
-
-Returns the truth value of (x > y) element-wise.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* `y`: A `Tensor`. Must have the same type as `x`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.group.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.group.md
new file mode 100644
index 0000000000..7958cf9e58
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.group.md
@@ -0,0 +1,25 @@
+### `tf.group(*inputs, **kwargs)` {#group}
+
+Create an op that groups multiple operations.
+
+When this op finishes, all ops in `inputs` have finished. This op has no
+output.
+
+See also `tuple` and `with_dependencies`.
+
+##### Args:
+
+
+* `*inputs`: Zero or more tensors to group.
+* `**kwargs`: Optional parameters to pass when constructing the NodeDef.
+* `name`: A name for this operation (optional).
+
+##### Returns:
+
+  An Operation that executes all its inputs.
+
+##### Raises:
+
+
+* `ValueError`: If an unknown keyword argument is provided.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.encode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.encode_jpeg.md deleted file mode 100644 index 24b1886c10..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.encode_jpeg.md +++ /dev/null @@ -1,51 +0,0 @@ -### `tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)` {#encode_jpeg} - -JPEG-encode an image. - -`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`. - -The attr `format` can be used to override the color format of the encoded -output. Values can be: - -* `''`: Use a default format based on the number of channels in the image. -* `grayscale`: Output a grayscale JPEG image. The `channels` dimension - of `image` must be 1. -* `rgb`: Output an RGB JPEG image. The `channels` dimension - of `image` must be 3. - -If `format` is not specified or is the empty string, a default format is picked -in function of the number of channels in `image`: - -* 1: Output a grayscale image. -* 3: Output an RGB image. - -##### Args: - - -* `image`: A `Tensor` of type `uint8`. - 3-D with shape `[height, width, channels]`. -* `format`: An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`. - Per pixel image format. -* `quality`: An optional `int`. Defaults to `95`. - Quality of the compression from 0 to 100 (higher is better and slower). -* `progressive`: An optional `bool`. Defaults to `False`. - If True, create a JPEG that loads progressively (coarse to fine). -* `optimize_size`: An optional `bool`. Defaults to `False`. - If True, spend CPU/RAM to reduce size with no quality change. -* `chroma_downsampling`: An optional `bool`. Defaults to `True`. - See http://en.wikipedia.org/wiki/Chroma_subsampling. 
-* `density_unit`: An optional `string` from: `"in", "cm"`. Defaults to `"in"`. - Unit used to specify `x_density` and `y_density`: - pixels per inch (`'in'`) or centimeter (`'cm'`). -* `x_density`: An optional `int`. Defaults to `300`. - Horizontal pixels per density unit. -* `y_density`: An optional `int`. Defaults to `300`. - Vertical pixels per density unit. -* `xmp_metadata`: An optional `string`. Defaults to `""`. - If not empty, embed this XMP metadata in the image header. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. 0-D. JPEG-encoded image. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.extract_glimpse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.extract_glimpse.md deleted file mode 100644 index e0ca72e2c5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.extract_glimpse.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)` {#extract_glimpse} - -Extracts a glimpse from the input tensor. - -Returns a set of windows called glimpses extracted at location -`offsets` from the input tensor. If the windows only partially -overlaps the inputs, the non overlapping areas will be filled with -random noise. - -The result is a 4-D tensor of shape `[batch_size, glimpse_height, -glimpse_width, channels]`. The channels and batch dimensions are the -same as that of the input tensor. The height and width of the output -windows are specified in the `size` parameter. - -The argument `normalized` and `centered` controls how the windows are - -##### Args: - - -* `input`: A `Tensor` of type `float32`. -* `size`: A `Tensor` of type `int32`. -* `offsets`: A `Tensor` of type `float32`. -* `centered`: An optional `bool`. Defaults to `True`. -* `normalized`: An optional `bool`. Defaults to `True`. 
-* `uniform_noise`: An optional `bool`. Defaults to `True`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md deleted file mode 100644 index 6c773b6985..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.image.random_brightness(image, max_delta, seed=None)` {#random_brightness} - -Adjust the brightness of images by a random factor. - -Equivalent to `adjust_brightness()` using a `delta` randomly picked in the -interval `[-max_delta, max_delta)`. - -##### Args: - - -* `image`: An image. -* `max_delta`: float, must be non-negative. -* `seed`: A Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. - -##### Returns: - - The brightness-adjusted image. - -##### Raises: - - -* `ValueError`: if `max_delta` is negative. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_contrast.md new file mode 100644 index 0000000000..76cd2292cf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_contrast.md @@ -0,0 +1,26 @@ +### `tf.image.random_contrast(image, lower, upper, seed=None)` {#random_contrast} + +Adjust the contrast of an image by a random factor. + +Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly +picked in the interval `[lower, upper]`. + +##### Args: + + +* `image`: An image tensor with 3 or more dimensions. +* `lower`: float. Lower bound for the random contrast factor. +* `upper`: float. Upper bound for the random contrast factor. 
+* `seed`: A Python integer. Used to create a random seed. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. + +##### Returns: + + The contrast-adjusted tensor. + +##### Raises: + + +* `ValueError`: if `upper <= lower` or if `lower < 0`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_area.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_area.md new file mode 100644 index 0000000000..dbc6fd1bcd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_area.md @@ -0,0 +1,24 @@ +### `tf.image.resize_area(images, size, align_corners=None, name=None)` {#resize_area} + +Resize `images` to `size` using area interpolation. + +Input images can be of different types but output images are always float. + +##### Args: + + +* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. + 4-D with shape `[batch, height, width, channels]`. +* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The + new size for the images. +* `align_corners`: An optional `bool`. Defaults to `False`. + If true, rescale input by (new_height - 1) / (height - 1), which + exactly aligns the 4 corners of images and resized images. If false, rescale + by new_height / height. Treat similarly the width dimension. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. 4-D with shape + `[batch, new_height, new_width, channels]`. 
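As a rough sketch of the idea behind area interpolation in the special case of an exact integer downscale (each output pixel averages the source box it covers; the helper name and the 2-D grayscale restriction are illustrative assumptions, the real op also handles arbitrary scales, batches, and channels):

```python
def area_downscale(image, factor):
    """Average each factor x factor source block into one float output pixel."""
    height, width = len(image), len(image[0])
    out = []
    for r in range(height // factor):
        row = []
        for c in range(width // factor):
            block = [image[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))  # output is float, as documented
        out.append(row)
    return out
```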
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_images.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_images.md deleted file mode 100644 index d010cac831..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.resize_images.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.image.resize_images(images, new_height, new_width, method=0, align_corners=False)` {#resize_images} - -Resize `images` to `new_width`, `new_height` using the specified `method`. - -Resized images will be distorted if their original aspect ratio is not -the same as `new_width`, `new_height`. To avoid distortions see -[`resize_image_with_crop_or_pad`](#resize_image_with_crop_or_pad). - -`method` can be one of: - -* `ResizeMethod.BILINEAR`: [Bilinear interpolation.] - (https://en.wikipedia.org/wiki/Bilinear_interpolation) -* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.] - (https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation) -* `ResizeMethod.BICUBIC`: [Bicubic interpolation.] - (https://en.wikipedia.org/wiki/Bicubic_interpolation) -* `ResizeMethod.AREA`: Area interpolation. - -##### Args: - - -* `images`: 4-D Tensor of shape `[batch, height, width, channels]` or - 3-D Tensor of shape `[height, width, channels]`. -* `new_height`: integer. -* `new_width`: integer. -* `method`: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`. -* `align_corners`: bool. If true, exactly align all 4 corners of the input and - output. Defaults to `false`. - -##### Raises: - - -* `ValueError`: if the shape of `images` is incompatible with the - shape arguments to this function -* `ValueError`: if an unsupported resize method is specified. - -##### Returns: - - If `images` was 4-D, a 4-D float Tensor of shape - `[batch, new_height, new_width, channels]`. - If `images` was 3-D, a 3-D float Tensor of shape - `[new_height, new_width, channels]`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.transpose_image.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.transpose_image.md deleted file mode 100644 index 1cc527d345..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.transpose_image.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.image.transpose_image(image)` {#transpose_image} - -Transpose an image by swapping the first and second dimension. - -See also `transpose()`. - -##### Args: - - -* `image`: 3-D tensor of shape `[height, width, channels]` - -##### Returns: - - A 3-D tensor of shape `[width, height, channels]` - -##### Raises: - - -* `ValueError`: if the shape of `image` not supported. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.initialize_local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.initialize_local_variables.md new file mode 100644 index 0000000000..2a56dbb9d6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.initialize_local_variables.md @@ -0,0 +1,10 @@ +### `tf.initialize_local_variables()` {#initialize_local_variables} + +Returns an Op that initializes all local variables. + +This is just a shortcut for `initialize_variables(local_variables())` + +##### Returns: + + An Op that initializes all local variables in the graph. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.inv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.inv.md new file mode 100644 index 0000000000..dfff52be12 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.inv.md @@ -0,0 +1,16 @@ +### `tf.inv(x, name=None)` {#inv} + +Computes the reciprocal of x element-wise. + +I.e., \\(y = 1 / x\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. 
+* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matching_files.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matching_files.md deleted file mode 100644 index 297462d580..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matching_files.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.matching_files(pattern, name=None)` {#matching_files} - -Returns the set of files matching a pattern. - -Note that this routine only supports wildcard characters in the -basename portion of the pattern, not in the directory portion. - -##### Args: - - -* `pattern`: A `Tensor` of type `string`. A (scalar) shell wildcard pattern. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. A vector of matching filenames. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matrix_solve_ls.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matrix_solve_ls.md new file mode 100644 index 0000000000..8f5548d2cb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.matrix_solve_ls.md @@ -0,0 +1,47 @@ +### `tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#matrix_solve_ls} + +Solves a linear least-squares problem. + +Below we will use the following notation +`matrix`=\\(A \in \Re^{m \times n}\\), +`rhs`=\\(B \in \Re^{m \times k}\\), +`output`=\\(X \in \Re^{n \times k}\\), +`l2_regularizer`=\\(\lambda\\). + +If `fast` is `True`, then the solution is computed by solving the normal +equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then +\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the regularized +least-squares problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} +||A Z - B||_F^2 + \lambda ||Z||_F^2\\). 
If \\(m \lt n\\) then `output` is
+computed as \\(X = A^T (A A^T + \lambda I)^{-1} B\\),
+which (for \\(\lambda = 0\\)) is the minimum-norm solution to the
+under-determined linear system, i.e.
+\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\),
+subject to \\(A Z = B\\).
+Notice that the fast path is only numerically stable when \\(A\\) is
+numerically full rank and has a condition number
+\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\)
+or \\(\lambda\\) is sufficiently large.
+
+If `fast` is `False` then the solution is computed using the rank-revealing
+QR decomposition with column pivoting. This will always compute a
+least-squares solution that minimizes the residual norm
+\\(||A X - B||_F^2 \\), even when \\(A\\) is rank deficient or
+ill-conditioned. Note that the current version does not compute a minimum-norm
+solution. If `fast` is `False` then `l2_regularizer` is ignored.
+
+##### Args:
+
+
+* `matrix`: 2-D `Tensor` of shape `[M, N]`.
+* `rhs`: 2-D `Tensor` of shape `[M, K]`.
+* `l2_regularizer`: 0-D `double` `Tensor`. Ignored if `fast=False`.
+* `fast`: bool. Defaults to `True`.
+* `name`: string, optional name of the operation.
+
+##### Returns:
+
+
+* `output`: Matrix of shape `[N, K]` containing the matrix that solves
+  `matrix * output = rhs` in the least-squares sense.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.merge_all_summaries.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.merge_all_summaries.md
new file mode 100644
index 0000000000..40143de15d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.merge_all_summaries.md
@@ -0,0 +1,16 @@
+### `tf.merge_all_summaries(key='summaries')` {#merge_all_summaries}
+
+Merges all summaries collected in the default graph.
+
+##### Args:
+
+
+* `key`: `GraphKey` used to collect the summaries. Defaults to
+  `GraphKeys.SUMMARIES`.
+ +##### Returns: + + If no summaries were collected, returns None. Otherwise returns a scalar + `Tensor` of type `string` containing the serialized `Summary` protocol + buffer resulting from the merging. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.batch_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.batch_normalization.md new file mode 100644 index 0000000000..eda1d7d053 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.batch_normalization.md @@ -0,0 +1,46 @@ +### `tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)` {#batch_normalization} + +Batch normalization. + +As described in http://arxiv.org/abs/1502.03167. +Normalizes a tensor by `mean` and `variance`, and applies (optionally) a +`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\): + +\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\) + +`mean`, `variance`, `offset` and `scale` are all expected to be of one of two +shapes: + * In all generality, they can have the same number of dimensions as the + input `x`, with identical sizes as `x` for the dimensions that are not + normalized over (the 'depth' dimension(s)), and dimension 1 for the + others which are being normalized over. + `mean` and `variance` in this case would typically be the outputs of + `tf.nn.moments(..., keep_dims=True)` during training, or running averages + thereof during inference. + * In the common case where the 'depth' dimension is the last dimension in + the input tensor `x`, they may be one dimensional tensors of the same + size as the 'depth' dimension. + This is the case for example for the common `[batch, depth]` layout of + fully-connected layers, and `[batch, height, width, depth]` for + convolutions. + `mean` and `variance` in this case would typically be the outputs of + `tf.nn.moments(..., keep_dims=False)` during training, or running averages + thereof during inference. 
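As a scalar illustration of the formula above, \\(y = \gamma (x - \mu) / \sqrt{\sigma^2 + \epsilon} + \beta\\) (a minimal sketch of the math only, with an illustrative helper name; not the fused TensorFlow kernel, which applies this elementwise with broadcasting):

```python
import math

def batch_norm_scalar(x, mean, variance, offset, scale, variance_epsilon):
    """Normalize one value: scale * (x - mean) / sqrt(variance + eps) + offset."""
    y = (x - mean) / math.sqrt(variance + variance_epsilon)
    if scale is not None:   # gamma
        y *= scale
    if offset is not None:  # beta
        y += offset
    return y
```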
+ +##### Args: + + +* `x`: Input `Tensor` of arbitrary dimensionality. +* `mean`: A mean `Tensor`. +* `variance`: A variance `Tensor`. +* `offset`: An offset `Tensor`, often denoted \\(\beta\\) in equations, or + None. If present, will be added to the normalized tensor. +* `scale`: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or + `None`. If present, the scale is applied to the normalized tensor. +* `variance_epsilon`: A small float number to avoid dividing by 0. +* `name`: A name for this operation (optional). + +##### Returns: + + the normalized, scaled, offset tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.nce_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.nce_loss.md deleted file mode 100644 index 2fc7ab6b65..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.nce_loss.md +++ /dev/null @@ -1,53 +0,0 @@ -### `tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy='mod', name='nce_loss')` {#nce_loss} - -Computes and returns the noise-contrastive estimation training loss. - -See [Noise-contrastive estimation: A new estimation principle for -unnormalized statistical models] -(http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). -Also see our [Candidate Sampling Algorithms Reference] -(../../extras/candidate_sampling.pdf) - -Note: In the case where `num_true` > 1, we assign to each target class -the target probability 1 / `num_true` so that the target probabilities -sum to 1 per-example. - -Note: It would be useful to allow a variable number of target classes per -example. We hope to provide this functionality in a future release. -For now, if you have a variable number of target classes, you can pad them -out to a constant number by either repeating them or by padding -with an otherwise unused class. 
- -##### Args: - - -* `weights`: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` - objects whose concatenation along dimension 0 has shape - [num_classes, dim]. The (possibly-partitioned) class embeddings. -* `biases`: A `Tensor` of shape `[num_classes]`. The class biases. -* `inputs`: A `Tensor` of shape `[batch_size, dim]`. The forward - activations of the input network. -* `labels`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. The target classes. -* `num_sampled`: An `int`. The number of classes to randomly sample per batch. -* `num_classes`: An `int`. The number of possible classes. -* `num_true`: An `int`. The number of target classes per training example. -* `sampled_values`: a tuple of (`sampled_candidates`, `true_expected_count`, - `sampled_expected_count`) returned by a `*_candidate_sampler` function. - (if None, we default to `log_uniform_candidate_sampler`) -* `remove_accidental_hits`: A `bool`. Whether to remove "accidental hits" - where a sampled class equals one of the target classes. If set to - `True`, this is a "Sampled Logistic" loss instead of NCE, and we are - learning to generate log-odds instead of log probabilities. See - our [Candidate Sampling Algorithms Reference] - (../../extras/candidate_sampling.pdf). - Default is False. -* `partition_strategy`: A string specifying the partitioning strategy, relevant - if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. - Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. -* `name`: A name for the operation (optional). - -##### Returns: - - A `batch_size` 1-D tensor of per-example NCE losses. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.relu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.relu.md deleted file mode 100644 index 5811a1da96..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.relu.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.nn.relu(features, name=None)` {#relu} - -Computes rectified linear: `max(features, 0)`. - -##### Args: - - -* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `features`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softmax_cross_entropy_with_logits.md deleted file mode 100644 index d6054c49ac..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softmax_cross_entropy_with_logits.md +++ /dev/null @@ -1,36 +0,0 @@ -### `tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)` {#softmax_cross_entropy_with_logits} - -Computes softmax cross entropy between `logits` and `labels`. - -Measures the probability error in discrete classification tasks in which the -classes are mutually exclusive (each entry is in exactly one class). For -example, each CIFAR-10 image is labeled with one and only one label: an image -can be a dog or a truck, but not both. - -**NOTE:** While the classes are mutually exclusive, their probabilities -need not be. All that is required is that each row of `labels` is -a valid probability distribution. If they are not, the computation of the -gradient will be incorrect. - -If using exclusive `labels` (wherein one and only -one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`. 
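A numerically stable reference computation of this loss over `[batch_size, num_classes]` inputs can be written in a few lines of plain Python (a sketch using the log-sum-exp trick, not TensorFlow's fused kernel; the helper name is this sketch's own):

```python
import math

def softmax_cross_entropy_with_logits(logits, labels):
    # Per example: -sum_j labels[j] * log(softmax(logits)[j]).
    # log softmax(logits)[j] = logits[j] - logsumexp(logits); shifting by
    # the max logit keeps exp() from overflowing.
    losses = []
    for lg, lb in zip(logits, labels):
        m = max(lg)
        lse = m + math.log(sum(math.exp(v - m) for v in lg))
        losses.append(sum(l * (lse - v) for v, l in zip(lg, lb)))
    return losses

loss = softmax_cross_entropy_with_logits([[0.0, 0.0]], [[1.0, 0.0]])
# loss[0] == log(2): the softmax of equal logits is uniform over 2 classes
```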
- -**WARNING:** This op expects unscaled logits, since it performs a `softmax` -on `logits` internally for efficiency. Do not call this op with the -output of `softmax`, as it will produce incorrect results. - -`logits` and `labels` must have the same shape `[batch_size, num_classes]` -and the same dtype (either `float32` or `float64`). - -##### Args: - - -* `logits`: Unscaled log probabilities. -* `labels`: Each row `labels[i]` must be a valid probability distribution. -* `name`: A name for the operation (optional). - -##### Returns: - - A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the - softmax cross entropy loss. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softplus.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softplus.md deleted file mode 100644 index c0faef9687..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.softplus.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.nn.softplus(features, name=None)` {#softplus} - -Computes softplus: `log(exp(features) + 1)`. - -##### Args: - - -* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `features`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.top_k.md deleted file mode 100644 index 819c0ad068..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.top_k.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.nn.top_k(input, k=1, sorted=True, name=None)` {#top_k} - -Finds values and indices of the `k` largest entries for the last dimension. - -If the input is a vector (rank-1), finds the `k` largest entries in the vector -and outputs their values and indices as vectors. 
Thus `values[j]` is the -`j`-th largest entry in `input`, and its index is `indices[j]`. - -For matrices (resp. higher rank input), computes the top `k` entries in each -row (resp. vector along the last dimension). Thus, - - values.shape = indices.shape = input.shape[:-1] + [k] - -If two elements are equal, the lower-index element appears first. - -##### Args: - - -* `input`: 1-D or higher `Tensor` with last dimension at least `k`. -* `k`: 0-D `int32` `Tensor`. Number of top elements to look for along the last - dimension (along each row for matrices). -* `sorted`: If true the resulting `k` elements will be sorted by the values in - descending order. -* `name`: Optional name for the operation. - -##### Returns: - - -* `values`: The `k` largest elements along each last dimensional slice. -* `indices`: The indices of `values` within the last dimension of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.uniform_candidate_sampler.md deleted file mode 100644 index c34056dc84..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.uniform_candidate_sampler.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#uniform_candidate_sampler} - -Samples a set of classes using a uniform base distribution. - -This operation randomly samples a tensor of sampled classes -(`sampled_candidates`) from the range of integers `[0, range_max)`. - -The elements of `sampled_candidates` are drawn without replacement -(if `unique=True`) or with replacement (if `unique=False`) from -the base distribution. - -The base distribution for this operation is the uniform distribution -over the range of integers `[0, range_max)`. 
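The base distribution and the `unique` flag can be modeled with the stdlib `random` module (an illustrative sketch of the sampling semantics only; the helper name is this sketch's own, and the expected-count outputs are not computed here):

```python
import random

def sample_candidates(num_sampled, unique, range_max, seed=None):
    # sampled_candidates drawn from the uniform distribution on [0, range_max):
    # without replacement when unique=True, with replacement otherwise.
    rng = random.Random(seed)
    if unique:
        return rng.sample(range(range_max), num_sampled)
    return [rng.randrange(range_max) for _ in range(num_sampled)]

cands = sample_candidates(num_sampled=4, unique=True, range_max=10, seed=42)
# four distinct class ids, each in [0, 10)
```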
- -In addition, this operation returns tensors `true_expected_count` -and `sampled_expected_count` representing the number of times each -of the target classes (`true_classes`) and the sampled -classes (`sampled_candidates`) is expected to occur in an average -tensor of sampled classes. These values correspond to `Q(y|x)` -defined in [this -document](http://www.tensorflow.org/extras/candidate_sampling.pdf). -If `unique=True`, then these are post-rejection probabilities and we -compute them approximately. - -##### Args: - - -* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. The target classes. -* `num_true`: An `int`. The number of target classes per training example. -* `num_sampled`: An `int`. The number of classes to randomly sample per batch. -* `unique`: A `bool`. Determines whether all sampled classes in a batch are - unique. -* `range_max`: An `int`. The number of possible classes. -* `seed`: An `int`. An operation-specific seed. Default is 0. -* `name`: A name for the operation (optional). - -##### Returns: - - -* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. - The sampled classes. -* `true_expected_count`: A tensor of type `float`. Same shape as - `true_classes`. The expected counts under the sampling distribution - of each of `true_classes`. -* `sampled_expected_count`: A tensor of type `float`. Same shape as - `sampled_candidates`. The expected counts under the sampling distribution - of each of `sampled_candidates`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.no_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.no_op.md new file mode 100644 index 0000000000..c1b5c0824b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.no_op.md @@ -0,0 +1,13 @@ +### `tf.no_op(name=None)` {#no_op} + +Does nothing. Only useful as a placeholder for control edges. 
+ +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.parse_single_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.parse_single_example.md deleted file mode 100644 index e0bce09137..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.parse_single_example.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.parse_single_example(serialized, features, name=None, example_names=None)` {#parse_single_example} - -Parses a single `Example` proto. - -Similar to `parse_example`, except: - -For dense tensors, the returned `Tensor` is identical to the output of -`parse_example`, except there is no batch dimension, the output shape is the -same as the shape given in `dense_shape`. - -For `SparseTensor`s, the first (batch) column of the indices matrix is removed -(the indices matrix is a column vector), the values vector is unchanged, and -the first (`batch_size`) entry of the shape vector is removed (it is now a -single element vector). - -##### Args: - - -* `serialized`: A scalar string Tensor, a single serialized Example. - See `_parse_single_example_raw` documentation for more details. -* `features`: A `dict` mapping feature keys to `FixedLenFeature` or - `VarLenFeature` values. -* `name`: A name for this operation (optional). -* `example_names`: (Optional) A scalar string Tensor, the associated name. - See `_parse_single_example_raw` documentation for more details. - -##### Returns: - - A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. - -##### Raises: - - -* `ValueError`: if any feature is invalid. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.polygamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.polygamma.md new file mode 100644 index 0000000000..c8b5b2578a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.polygamma.md @@ -0,0 +1,22 @@ +### `tf.polygamma(a, x, name=None)` {#polygamma} + +Compute the polygamma function \\(\psi^{(n)}(x)\\). + +The polygamma function is defined as: + +``` +\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x) +``` +where \\(\psi(x)\\) is the digamma function. + +##### Args: + + +* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `x`: A `Tensor`. Must have the same type as `a`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `a`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.pow.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.pow.md deleted file mode 100644 index 8588b72fb8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.pow.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.pow(x, y, name=None)` {#pow} - -Computes the power of one value to another. - -Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for -corresponding elements in `x` and `y`. For example: - -``` -# tensor 'x' is [[2, 2], [3, 3]] -# tensor 'y' is [[8, 16], [2, 3]] -tf.pow(x, y) ==> [[256, 65536], [9, 27]] -``` - -##### Args: - - -* `x`: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`. -* `y`: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. 
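The worked example in the `tf.pow` entry above can be reproduced with a plain-Python sketch of the elementwise case (same-shaped inputs, no broadcasting; the helper name is this sketch's own):

```python
def pow_elementwise(x, y):
    # Elementwise x ** y over same-shaped nested lists.
    return [[a ** b for a, b in zip(row_x, row_y)]
            for row_x, row_y in zip(x, y)]

x = [[2, 2], [3, 3]]
y = [[8, 16], [2, 3]]
result = pow_elementwise(x, y)
# result == [[256, 65536], [9, 27]], matching the tf.pow example above
```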
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_any.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_any.md deleted file mode 100644 index 58a911a8cf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_any.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_any} - -Computes the "logical or" of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. -Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -For example: - -```python -# 'x' is [[True, True] -# [False, False]] -tf.reduce_any(x) ==> True -tf.reduce_any(x, 0) ==> [True, True] -tf.reduce_any(x, 1) ==> [True, False] -``` - -##### Args: - - -* `input_tensor`: The boolean tensor to reduce. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. -* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_prod.md deleted file mode 100644 index a87daa33fb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_prod.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_prod} - -Computes the product of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. 
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `reduction_indices` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* `input_tensor`: The tensor to reduce. Should have numeric type.
-* `reduction_indices`: The dimensions to reduce. If `None` (the default),
-    reduces all dimensions.
-* `keep_dims`: If true, retains reduced dimensions with length 1.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  The reduced tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.report_uninitialized_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.report_uninitialized_variables.md
new file mode 100644
index 0000000000..35536e65d9
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.report_uninitialized_variables.md
@@ -0,0 +1,19 @@
+### `tf.report_uninitialized_variables(var_list=None, name='report_uninitialized_variables')` {#report_uninitialized_variables}
+
+Adds ops to list the names of uninitialized variables.
+
+When run, it returns a 1-D tensor containing the names of uninitialized
+variables if there are any, or an empty array if there are none.
+
+##### Args:
+
+
+* `var_list`: List of `Variable` objects to check. Defaults to the
+    value of `all_variables() + local_variables()`.
+* `name`: Optional name of the `Operation`.
+
+##### Returns:
+
+  A 1-D tensor containing names of the uninitialized variables, or an empty 1-D
+  tensor if there are no variables or no uninitialized variables.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reset_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reset_default_graph.md deleted file mode 100644 index ae5a906a0d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reset_default_graph.md +++ /dev/null @@ -1,10 +0,0 @@ -### `tf.reset_default_graph()` {#reset_default_graph} - -Clears the default graph stack and resets the global default graph. - -NOTE: The default graph is a property of the current thread. This -function applies only to the current thread. Calling this function while -a `tf.Session` or `tf.InteractiveSession` is active will result in undefined -behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects -after calling this function will result in undefined behavior. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md new file mode 100644 index 0000000000..fac4ac2ebe --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md @@ -0,0 +1,76 @@ +### `tf.reverse_sequence(input, seq_lengths, seq_dim, batch_dim=None, name=None)` {#reverse_sequence} + +Reverses variable length slices. + +This op first slices `input` along the dimension `batch_dim`, and for each +slice `i`, reverses the first `seq_lengths[i]` elements along +the dimension `seq_dim`. + +The elements of `seq_lengths` must obey `seq_lengths[i] < input.dims[seq_dim]`, +and `seq_lengths` must be a vector of length `input.dims[batch_dim]`. + +The output slice `i` along dimension `batch_dim` is then given by input +slice `i`, with the first `seq_lengths[i]` slices along dimension +`seq_dim` reversed. + +For example: + +```prettyprint +# Given this: +batch_dim = 0 +seq_dim = 1 +input.dims = (4, 8, ...) 
+seq_lengths = [7, 2, 3, 5]
+
+# then slices of input are reversed on seq_dim, but only up to seq_lengths:
+output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
+output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
+output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
+output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]
+
+# while entries past seq_lens are copied through:
+output[0, 7:, :, ...] = input[0, 7:, :, ...]
+output[1, 2:, :, ...] = input[1, 2:, :, ...]
+output[2, 3:, :, ...] = input[2, 3:, :, ...]
+output[3, 5:, :, ...] = input[3, 5:, :, ...]
+```
+
+In contrast, if:
+
+```prettyprint
+# Given this:
+batch_dim = 2
+seq_dim = 0
+input.dims = (8, ?, 4, ...)
+seq_lengths = [7, 2, 3, 5]
+
+# then slices of input are reversed on seq_dim, but only up to seq_lengths:
+output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
+output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
+output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
+output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]
+
+# while entries past seq_lens are copied through:
+output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
+output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
+output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
+output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
+```
+
+##### Args:
+
+
+* `input`: A `Tensor`. The input to reverse.
+* `seq_lengths`: A `Tensor` of type `int64`.
+    1-D with length `input.dims(batch_dim)` and
+    `max(seq_lengths) < input.dims(seq_dim)`
+* `seq_dim`: An `int`. The dimension which is partially reversed.
+* `batch_dim`: An optional `int`. Defaults to `0`.
+    The dimension along which reversal is performed.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
+  The partially reversed input. It has the same shape as `input`.
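For the `batch_dim=0`, `seq_dim=1` case on a 2-D input, the semantics of `tf.reverse_sequence` reduce to a short list manipulation (a sketch of the behavior, not the op itself; the helper name is this sketch's own):

```python
def reverse_sequence_2d(inp, seq_lengths):
    # Reverse the first seq_lengths[i] elements of row i;
    # entries past seq_lengths[i] are copied through unchanged.
    return [row[:n][::-1] + row[n:] for row, n in zip(inp, seq_lengths)]

x = [[1, 2, 3, 4, 5],
     [1, 2, 3, 4, 5]]
out = reverse_sequence_2d(x, seq_lengths=[3, 5])
# out == [[3, 2, 1, 4, 5], [5, 4, 3, 2, 1]]
```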
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.round.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.round.md
deleted file mode 100644
index 8d2ce32921..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.round.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.round(x, name=None)` {#round}
-
-Rounds the values of a tensor to the nearest integer, element-wise.
-
-For example:
-
-```python
-# 'a' is [0.9, 2.5, 2.3, -4.4]
-tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
-```
-
-##### Args:
-
-
-* `x`: A `Tensor` of type `float` or `double`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of same shape and type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.self_adjoint_eig.md
new file mode 100644
index 0000000000..efbc0cd3be
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.self_adjoint_eig.md
@@ -0,0 +1,21 @@
+### `tf.self_adjoint_eig(input, name=None)` {#self_adjoint_eig}
+
+Calculates the Eigen Decomposition of a square Self-Adjoint matrix.
+
+Only the lower-triangular part of the input will be used in this case. The
+upper-triangular part will not be read.
+
+The result is an M+1 x M matrix whose first row is the eigenvalues, and
+subsequent rows are eigenvectors.
+
+##### Args:
+
+
+* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+    Shape is `[M, M]`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`. Shape is `[M+1, M]`.
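For `M = 2`, the packed `[M+1, M]` layout just described (eigenvalues in the first row, eigenvectors below) can be illustrated with the closed-form symmetric 2x2 solve (a hand-rolled sketch that reads only the lower triangle `a`, `b`, `d`; the one-eigenvector-per-row packing is this sketch's assumption, and the real op uses a LAPACK-style solver):

```python
import math

def self_adjoint_eig_2x2(a, b, d):
    # Eigendecomposition of [[a, b], [b, d]]; only the lower triangle is read.
    t = math.hypot(a - d, 2.0 * b)
    lam1, lam2 = (a + d - t) / 2.0, (a + d + t) / 2.0
    rows = [[lam1, lam2]]            # first row: the eigenvalues
    for lam in (lam1, lam2):         # then one unit eigenvector per row
        vx, vy = b, lam - a          # (b, lam - a) solves (A - lam*I)v = 0; assumes b != 0
        n = math.hypot(vx, vy)
        rows.append([vx / n, vy / n])
    return rows                      # shape [M+1, M] == [3, 2]

out = self_adjoint_eig_2x2(2.0, 1.0, 2.0)
# eigenvalues of [[2, 1], [1, 2]] are 1 and 3
```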
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.set_random_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.set_random_seed.md deleted file mode 100644 index af817dbafa..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.set_random_seed.md +++ /dev/null @@ -1,98 +0,0 @@ -### `tf.set_random_seed(seed)` {#set_random_seed} - -Sets the graph-level random seed. - -Operations that rely on a random seed actually derive it from two seeds: -the graph-level and operation-level seeds. This sets the graph-level seed. - -Its interactions with operation-level seeds is as follows: - - 1. If neither the graph-level nor the operation seed is set: - A random seed is used for this op. - 2. If the graph-level seed is set, but the operation seed is not: - The system deterministically picks an operation seed in conjunction - with the graph-level seed so that it gets a unique random sequence. - 3. If the graph-level seed is not set, but the operation seed is set: - A default graph-level seed and the specified operation seed are used to - determine the random sequence. - 4. If both the graph-level and the operation seed are set: - Both seeds are used in conjunction to determine the random sequence. 
- -To illustrate the user-visible effects, consider these examples: - -To generate different sequences across sessions, set neither -graph-level nor op-level seeds: - -```python -a = tf.random_uniform([1]) -b = tf.random_normal([1]) - -print("Session 1") -with tf.Session() as sess1: - print(sess1.run(a)) # generates 'A1' - print(sess1.run(a)) # generates 'A2' - print(sess1.run(b)) # generates 'B1' - print(sess1.run(b)) # generates 'B2' - -print("Session 2") -with tf.Session() as sess2: - print(sess2.run(a)) # generates 'A3' - print(sess2.run(a)) # generates 'A4' - print(sess2.run(b)) # generates 'B3' - print(sess2.run(b)) # generates 'B4' -``` - -To generate the same repeatable sequence for an op across sessions, set the -seed for the op: - -```python -a = tf.random_uniform([1], seed=1) -b = tf.random_normal([1]) - -# Repeatedly running this block with the same graph will generate the same -# sequence of values for 'a', but different sequences of values for 'b'. -print("Session 1") -with tf.Session() as sess1: - print(sess1.run(a)) # generates 'A1' - print(sess1.run(a)) # generates 'A2' - print(sess1.run(b)) # generates 'B1' - print(sess1.run(b)) # generates 'B2' - -print("Session 2") -with tf.Session() as sess2: - print(sess2.run(a)) # generates 'A1' - print(sess2.run(a)) # generates 'A2' - print(sess2.run(b)) # generates 'B3' - print(sess2.run(b)) # generates 'B4' -``` - -To make the random sequences generated by all ops be repeatable across -sessions, set a graph-level seed: - -```python -tf.set_random_seed(1234) -a = tf.random_uniform([1]) -b = tf.random_normal([1]) - -# Repeatedly running this block with the same graph will generate different -# sequences of 'a' and 'b'. 
-print("Session 1") -with tf.Session() as sess1: - print(sess1.run(a)) # generates 'A1' - print(sess1.run(a)) # generates 'A2' - print(sess1.run(b)) # generates 'B1' - print(sess1.run(b)) # generates 'B2' - -print("Session 2") -with tf.Session() as sess2: - print(sess2.run(a)) # generates 'A1' - print(sess2.run(a)) # generates 'A2' - print(sess2.run(b)) # generates 'B1' - print(sess2.run(b)) # generates 'B2' -``` - -##### Args: - - -* `seed`: integer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.shape.md new file mode 100644 index 0000000000..4262f41a3d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.shape.md @@ -0,0 +1,23 @@ +### `tf.shape(input, name=None)` {#shape} + +Returns the shape of a tensor. + +This operation returns a 1-D integer tensor representing the shape of `input`. + +For example: + +```prettyprint +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +shape(t) ==> [2, 2, 3] +``` + +##### Args: + + +* `input`: A `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int32`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sign.md deleted file mode 100644 index f0c021a741..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sign.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.sign(x, name=None)` {#sign} - -Returns an element-wise indication of the sign of a number. - -`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`. - -For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.space_to_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.space_to_batch.md deleted file mode 100644 index 1999f21ea3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.space_to_batch.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.space_to_batch(input, paddings, block_size, name=None)` {#space_to_batch} - -SpaceToBatch for 4-D tensors of type T. - -Zero-pads and then rearranges (permutes) blocks of spatial data into batch. -More specifically, this op outputs a copy of the input tensor where values from -the `height` and `width` dimensions are moved to the `batch` dimension. After -the zero-padding, both `height` and `width` of the input must be divisible by the -block size. - -##### Args: - - -* `input`: A `Tensor`. 4-D with shape `[batch, height, width, depth]`. -* `paddings`: A `Tensor` of type `int32`. - 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies - the padding of the input with zeros across the spatial dimensions as follows: - - paddings = [[pad_top, pad_bottom], [pad_left, pad_right]] - - The effective spatial dimensions of the zero-padded input tensor will be: - - height_pad = pad_top + height + pad_bottom - width_pad = pad_left + width + pad_right - - The attr `block_size` must be greater than one. It indicates the block size. - - * Non-overlapping blocks of size `block_size x block size` in the height and - width dimensions are rearranged into the batch dimension at each location. - * The batch of the output tensor is `batch * block_size * block_size`. - * Both height_pad and width_pad must be divisible by block_size. - - The shape of the output will be: - - [batch*block_size*block_size, height_pad/block_size, width_pad/block_size, - depth] - -* `block_size`: An `int`. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_merge.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_merge.md deleted file mode 100644 index 38742123d6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_merge.md +++ /dev/null @@ -1,73 +0,0 @@ -### `tf.sparse_merge(sp_ids, sp_values, vocab_size, name=None)` {#sparse_merge} - -Combines a batch of feature ids and values into a single `SparseTensor`. - -The most common use case for this function occurs when feature ids and -their corresponding values are stored in `Example` protos on disk. -`parse_example` will return a batch of ids and a batch of values, and this -function joins them into a single logical `SparseTensor` for use in -functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc. - -The `SparseTensor` returned by this function has the following properties: - - - `indices` is equivalent to `sp_ids.indices` with the last - dimension discarded and replaced with `sp_ids.values`. - - `values` is simply `sp_values.values`. - - If `sp_ids.shape = [D0, D1, ..., Dn, K]`, then - `output.shape = [D0, D1, ..., Dn, vocab_size]`. 
- -For example, consider the following feature vectors: - - vector1 = [-3, 0, 0, 0, 0, 0] - vector2 = [ 0, 1, 0, 4, 1, 0] - vector3 = [ 5, 0, 0, 9, 0, 0] - -These might be stored sparsely in the following Example protos by storing -only the feature ids (column number if the vectors are treated as a matrix) -of the non-zero elements and the corresponding values: - - examples = [Example(features={ - "ids": Feature(int64_list=Int64List(value=[0])), - "values": Feature(float_list=FloatList(value=[-3]))}), - Example(features={ - "ids": Feature(int64_list=Int64List(value=[1, 4, 3])), - "values": Feature(float_list=FloatList(value=[1, 1, 4]))}), - Example(features={ - "ids": Feature(int64_list=Int64List(value=[0, 3])), - "values": Feature(float_list=FloatList(value=[5, 9]))})] - -The result of calling parse_example on these examples will produce a -dictionary with entries for "ids" and "values". Passing those two objects -to this function along with vocab_size=6, will produce a `SparseTensor` that -sparsely represents all three instances. Namely, the `indices` property will -contain the coordinates of the non-zero entries in the feature matrix (the -first dimension is the row number in the matrix, i.e., the index within the -batch, and the second dimension is the column number, i.e., the feature id); -`values` will contain the actual values. `shape` will be the shape of the -original matrix, i.e., (3, 6). For our example above, the output will be -equal to: - - SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]], - values=[-3, 1, 4, 1, 5, 9], - shape=[3, 6]) - -##### Args: - - -* `sp_ids`: A `SparseTensor` with `values` property of type `int32` - or `int64`. -* `sp_values`: A`SparseTensor` of any type. -* `vocab_size`: A scalar `int64` Tensor (or Python int) containing the new size - of the last dimension, `all(0 <= sp_ids.values < vocab_size)`. 
-* `name`: A name prefix for the returned tensors (optional) - -##### Returns: - - A `SparseTensor` compactly representing a batch of feature ids and values, - useful for passing to functions that expect such a `SparseTensor`. - -##### Raises: - - -* `TypeError`: If `sp_ids` or `sp_values` are not a `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_segment_sqrt_n_grad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_segment_sqrt_n_grad.md deleted file mode 100644 index 2a2e0c9e33..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_segment_sqrt_n_grad.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.sparse_segment_sqrt_n_grad(grad, indices, segment_ids, output_dim0, name=None)` {#sparse_segment_sqrt_n_grad} - -Computes gradients for SparseSegmentSqrtN. - -Returns tensor "output" with same shape as grad, except for dimension 0 whose -value is output_dim0. - -##### Args: - - -* `grad`: A `Tensor`. Must be one of the following types: `float32`, `float64`. - gradient propagated to the SparseSegmentSqrtN op. -* `indices`: A `Tensor` of type `int32`. - indices passed to the corresponding SparseSegmentSqrtN op. -* `segment_ids`: A `Tensor` of type `int32`. - segment_ids passed to the corresponding SparseSegmentSqrtN op. -* `output_dim0`: A `Tensor` of type `int32`. - dimension 0 of "data" passed to SparseSegmentSqrtN op. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `grad`. 
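The worked `sparse_merge` example above (three `Example` protos yielding a `[3, 6]` matrix) can be checked with a short plain-Python sketch; no TensorFlow is needed, and `merge_to_dense` is an illustrative helper name, not part of the API:

```python
def merge_to_dense(ids_batch, values_batch, vocab_size):
    """Build the dense matrix that the merged SparseTensor represents."""
    dense = [[0] * vocab_size for _ in ids_batch]
    for row, (ids, values) in enumerate(zip(ids_batch, values_batch)):
        for col, val in zip(ids, values):
            dense[row][col] = val
    return dense

# The "ids"/"values" features from the three Example protos above:
ids = [[0], [1, 4, 3], [0, 3]]
values = [[-3], [1, 1, 4], [5, 9]]
dense = merge_to_dense(ids, values, vocab_size=6)
assert dense == [[-3, 0, 0, 0, 0, 0],   # vector1
                 [0, 1, 0, 4, 1, 0],    # vector2
                 [5, 0, 0, 9, 0, 0]]    # vector3
```

The coordinates of the non-zero entries in `dense` are exactly the `indices` list of the `SparseTensor` shown in that example.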
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_to_dense.md deleted file mode 100644 index d4df5a9183..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_to_dense.md +++ /dev/null @@ -1,45 +0,0 @@ -### `tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)` {#sparse_to_dense} - -Converts a sparse representation into a dense tensor. - -Builds an array `dense` with shape `output_shape` such that - -```python -# If sparse_indices is scalar -dense[i] = (i == sparse_indices ? sparse_values : default_value) - -# If sparse_indices is a vector, then for each i -dense[sparse_indices[i]] = sparse_values[i] - -# If sparse_indices is an n by d matrix, then for each i in [0, n) -dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i] -``` - -All other values in `dense` are set to `default_value`. If `sparse_values` -is a scalar, all sparse indices are set to this single value. - -Indices should be sorted in lexicographic order, and indices must not -contain any repeats. If `validate_indices` is True, these properties -are checked during execution. - -##### Args: - - -* `sparse_indices`: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`. - `sparse_indices[i]` contains the complete index where `sparse_values[i]` - will be placed. -* `output_shape`: A 1-D `Tensor` of the same type as `sparse_indices`. Shape - of the dense output tensor. -* `sparse_values`: A 0-D or 1-D `Tensor`. Values corresponding to each row of - `sparse_indices`, or a scalar value to be used for all sparse indices. -* `default_value`: A 0-D `Tensor` of the same type as `sparse_values`. Value - to set for indices not specified in `sparse_indices`. Defaults to zero. -* `validate_indices`: A boolean value. 
If True, indices are checked to make - sure they are sorted in lexicographic order and that there are no repeats. -* `name`: A name for the operation (optional). - -##### Returns: - - Dense `Tensor` of shape `output_shape`. Has the same type as - `sparse_values`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squared_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squared_difference.md deleted file mode 100644 index d6bb175669..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squared_difference.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.squared_difference(x, y, name=None)` {#squared_difference} - -Returns (x - y)(x - y) element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squeeze.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squeeze.md deleted file mode 100644 index e76c02e115..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.squeeze.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.squeeze(input, squeeze_dims=None, name=None)` {#squeeze} - -Removes dimensions of size 1 from the shape of a tensor. - -Given a tensor `input`, this operation returns a tensor of the same type with -all dimensions of size 1 removed. If you don't want to remove all size 1 -dimensions, you can remove specific size 1 dimensions by specifying -`squeeze_dims`. 
- -For example: - -```prettyprint -# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] -shape(squeeze(t)) ==> [2, 3] -``` - -Or, to remove specific size 1 dimensions: - -```prettyprint -# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] -shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1] -``` - -##### Args: - - -* `input`: A `Tensor`. The `input` to squeeze. -* `squeeze_dims`: An optional list of `ints`. Defaults to `[]`. - If specified, only squeezes the dimensions listed. The dimension - index starts at 0. It is an error to squeeze a dimension that is not 1. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - Contains the same data as `input`, but has one or more dimensions of - size 1 removed. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_hash_bucket_fast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_hash_bucket_fast.md new file mode 100644 index 0000000000..e684058326 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_hash_bucket_fast.md @@ -0,0 +1,23 @@ +### `tf.string_to_hash_bucket_fast(input, num_buckets, name=None)` {#string_to_hash_bucket_fast} + +Converts each string in the input Tensor to its hash modulo a number of buckets. + +The hash function is deterministic on the content of the string within the +process and will never change. However, it is not suitable for cryptography. +This function may be used when CPU time is scarce and inputs are trusted or +unimportant. There is a risk of adversaries constructing inputs that all hash +to the same bucket. To prevent this problem, use a strong hash function with +`tf.string_to_hash_bucket_strong`. + +##### Args: + + +* `input`: A `Tensor` of type `string`. The strings to assign a hash bucket. +* `num_buckets`: An `int` that is `>= 1`. The number of buckets. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + A `Tensor` of type `int64`. + A `Tensor` of the same shape as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_number.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_number.md deleted file mode 100644 index dfbc0c6b6c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.string_to_number.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.string_to_number(string_tensor, out_type=None, name=None)` {#string_to_number} - -Converts each string in the input Tensor to the specified numeric type. - -(Note that int32 overflow results in an error while float overflow -results in a rounded value.) - -##### Args: - - -* `string_tensor`: A `Tensor` of type `string`. -* `out_type`: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`. - The numeric type to interpret each string in string_tensor as. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `out_type`. - A Tensor of the same shape as the input `string_tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tanh.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tanh.md deleted file mode 100644 index b41e51c019..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tanh.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.tanh(x, name=None)` {#tanh} - -Computes hyperbolic tangent of `x` element-wise. - -##### Args: - - -* `x`: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`, - or `qint32`. -* `name`: A name for the operation (optional). - -##### Returns: - - A Tensor with the same type as `x` if `x.dtype != qint32` otherwise - the return type is `quint8`. 
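The bucketing contract documented for `tf.string_to_hash_bucket_fast` above (deterministic on the string's content, stable across processes, but not cryptographic) can be sketched in plain Python. MD5 and the helper name `bucket_of` are our illustrative choices here, not the op's actual hash function:

```python
import hashlib

def bucket_of(s, num_buckets):
    # Deterministic on the string's content across runs and processes,
    # like the op; an adversary who knows the hash can still force collisions.
    digest = hashlib.md5(s.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little") % num_buckets

buckets = [bucket_of(s, 10) for s in ("Hello", "TensorFlow", "Hello")]
assert buckets[0] == buckets[2]          # equal strings share a bucket
assert all(0 <= b < 10 for b in buckets)  # every bucket is in [0, num_buckets)
```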
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.test.main.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.test.main.md new file mode 100644 index 0000000000..c7aa9cf801 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.test.main.md @@ -0,0 +1,4 @@ +### `tf.test.main()` {#main} + +Runs all unit tests. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.to_double.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.to_double.md new file mode 100644 index 0000000000..0cabea178e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.to_double.md @@ -0,0 +1,19 @@ +### `tf.to_double(x, name='ToDouble')` {#to_double} + +Casts a tensor to type `float64`. + +##### Args: + + +* `x`: A `Tensor` or `SparseTensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` or `SparseTensor` with the same shape as `x`, with type `float64`. + +##### Raises: + + +* `TypeError`: If `x` cannot be cast to `float64`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.AdadeltaOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.AdadeltaOptimizer.md new file mode 100644 index 0000000000..9a14c50dc8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.AdadeltaOptimizer.md @@ -0,0 +1,23 @@ +Optimizer that implements the Adadelta algorithm. + +See [M. D. Zeiler](http://arxiv.org/abs/1212.5701) +([pdf](http://arxiv.org/pdf/1212.5701v1.pdf)) + +- - - + +#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__} + +Construct a new Adadelta optimizer. + +##### Args: + + +* `learning_rate`: A `Tensor` or a floating point value. The learning rate. +* `rho`: A `Tensor` or a floating point value. The decay rate. 
+* `epsilon`: A `Tensor` or a floating point value. A constant epsilon used + to better condition the grad update. +* `use_locking`: If `True` use locks for update operations. +* `name`: Optional name prefix for the operations created when applying + gradients. Defaults to "Adadelta". + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GradientDescentOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GradientDescentOptimizer.md deleted file mode 100644 index 99a5f1f0b1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GradientDescentOptimizer.md +++ /dev/null @@ -1,18 +0,0 @@ -Optimizer that implements the gradient descent algorithm. - -- - - - -#### `tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')` {#GradientDescentOptimizer.__init__} - -Construct a new gradient descent optimizer. - -##### Args: - - -* `learning_rate`: A Tensor or a floating point value. The learning - rate to use. -* `use_locking`: If True use locks for update operations. -* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "GradientDescent". - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.LooperThread.loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.LooperThread.loop.md new file mode 100644 index 0000000000..6665ca7369 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.LooperThread.loop.md @@ -0,0 +1,22 @@ +#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop} + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(args)` +repeatedly. Otherwise `target(args)` is called every `timer_interval_secs` +seconds. 
The thread terminates when the coordinator +requests a stop. + +##### Args: + + +* `coord`: A Coordinator. +* `timer_interval_secs`: Number. Time boundaries at which to call `target`. +* `target`: A callable object. +* `args`: Optional arguments to pass to `target` when calling it. +* `kwargs`: Optional keyword arguments to pass to `target` when calling it. + +##### Returns: + + The started thread. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.QueueRunner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.QueueRunner.md new file mode 100644 index 0000000000..812dc2b5bd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.QueueRunner.md @@ -0,0 +1,161 @@ +Holds a list of enqueue operations for a queue, each to be run in a thread. + +Queues are a convenient TensorFlow mechanism to compute tensors +asynchronously using multiple threads. For example in the canonical 'Input +Reader' setup one set of threads generates filenames in a queue; a second set +of threads reads records from the files, processes them, and enqueues tensors +on a second queue; a third set of threads dequeues these input records to +construct batches and runs them through training operations. + +There are several delicate issues when running multiple threads that way: +closing the queues in sequence as the input is exhausted, correctly catching +and reporting exceptions, etc. + +The `QueueRunner`, combined with the `Coordinator`, helps handle these issues. +- - - + +#### `tf.train.QueueRunner.__init__(queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_runner_def=None)` {#QueueRunner.__init__} + +Create a QueueRunner. + +On construction the `QueueRunner` adds an op to close the queue. That op +will be run if the enqueue ops raise exceptions. + +When you later call the `create_threads()` method, the `QueueRunner` will +create one thread for each op in `enqueue_ops`. 
Each thread will run its +enqueue op in parallel with the other threads. The enqueue ops do not have +to all be the same op, but it is expected that they all enqueue tensors in +`queue`. + +##### Args: + + +* `queue`: A `Queue`. +* `enqueue_ops`: List of enqueue ops to run in threads later. +* `close_op`: Op to close the queue. Pending enqueue ops are preserved. +* `cancel_op`: Op to close the queue and cancel pending enqueue ops. +* `queue_runner_def`: Optional `QueueRunnerDef` protocol buffer. If specified, + recreates the QueueRunner from its contents. `queue_runner_def` and the + other arguments are mutually exclusive. + +##### Raises: + + +* `ValueError`: If `queue_runner_def` and `queue` are both specified. +* `ValueError`: If `queue` or `enqueue_ops` are not provided when not + restoring from `queue_runner_def`. + + +- - - + +#### `tf.train.QueueRunner.cancel_op` {#QueueRunner.cancel_op} + + + + +- - - + +#### `tf.train.QueueRunner.close_op` {#QueueRunner.close_op} + + + + +- - - + +#### `tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False)` {#QueueRunner.create_threads} + +Create threads to run the enqueue ops. + +This method requires a session in which the graph was launched. It creates +a list of threads, optionally starting them. There is one thread for each +op passed in `enqueue_ops`. + +The `coord` argument is an optional coordinator that the threads will use +to terminate together and report exceptions. If a coordinator is given, +this method starts an additional thread to close the queue when the +coordinator requests a stop. + +This method may be called again as long as all threads from a previous call +have stopped. + +##### Args: + + +* `sess`: A `Session`. +* `coord`: Optional `Coordinator` object for reporting errors and checking + stop conditions. +* `daemon`: Boolean. If `True` make the threads daemon threads. +* `start`: Boolean. If `True` starts the threads. 
If `False` the + caller must call the `start()` method of the returned threads. + +##### Returns: + + A list of threads. + +##### Raises: + + +* `RuntimeError`: If threads from a previous call to `create_threads()` are + still running. + + +- - - + +#### `tf.train.QueueRunner.enqueue_ops` {#QueueRunner.enqueue_ops} + + + + +- - - + +#### `tf.train.QueueRunner.exceptions_raised` {#QueueRunner.exceptions_raised} + +Exceptions raised but not handled by the `QueueRunner` threads. + +Exceptions raised in queue runner threads are handled in one of two ways +depending on whether or not a `Coordinator` was passed to +`create_threads()`: + +* With a `Coordinator`, exceptions are reported to the coordinator and + forgotten by the `QueueRunner`. +* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and + made available in this `exceptions_raised` property. + +##### Returns: + + A list of Python `Exception` objects. The list is empty if no exception + was captured. (No exceptions are captured when using a Coordinator.) + + +- - - + +#### `tf.train.QueueRunner.from_proto(queue_runner_def)` {#QueueRunner.from_proto} + +Returns a `QueueRunner` object created from `queue_runner_def`. + + +- - - + +#### `tf.train.QueueRunner.name` {#QueueRunner.name} + +The string name of the underlying Queue. + + +- - - + +#### `tf.train.QueueRunner.queue` {#QueueRunner.queue} + + + + +- - - + +#### `tf.train.QueueRunner.to_proto()` {#QueueRunner.to_proto} + +Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer. + +##### Returns: + + A `QueueRunnerDef` protocol buffer. 
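The pattern `QueueRunner` documents above, several enqueue threads feeding one bounded queue and stopping on a shared signal, can be sketched with the Python standard library. The names below are ours, and a `threading.Event` stands in for a real `Coordinator`:

```python
import queue
import threading

def run_enqueue_threads(enqueue_fns, q, stop):
    """Start one worker per enqueue function, as QueueRunner starts one
    thread per enqueue op."""
    def worker(fn):
        while not stop.is_set():
            try:
                q.put(fn(), timeout=0.1)
            except queue.Full:
                continue  # queue is full; re-check the stop signal
    threads = [threading.Thread(target=worker, args=(fn,), daemon=True)
               for fn in enqueue_fns]
    for t in threads:
        t.start()
    return threads

q = queue.Queue(maxsize=8)
stop = threading.Event()  # stand-in for a Coordinator's stop request
threads = run_enqueue_threads([lambda: 1, lambda: 2], q, stop)
batch = [q.get() for _ in range(4)]  # "dequeue" a batch of four elements
stop.set()                           # request a stop, as a Coordinator would
for t in threads:
    t.join()
assert set(batch) <= {1, 2}
```

The bounded `put` with a timeout is what lets each worker notice the stop request even when the queue is full, mirroring how queue runner threads must remain responsive to a coordinator.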
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.RMSPropOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.RMSPropOptimizer.md new file mode 100644 index 0000000000..317f1e2adf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.RMSPropOptimizer.md @@ -0,0 +1,23 @@ +Optimizer that implements the RMSProp algorithm. + +See the +[paper](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). + +- - - + +#### `tf.train.RMSPropOptimizer.__init__(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp')` {#RMSPropOptimizer.__init__} + +Construct a new RMSProp optimizer. + +##### Args: + + +* `learning_rate`: A Tensor or a floating point value. The learning rate. +* `decay`: Discounting factor for the history/coming gradient. +* `momentum`: A scalar tensor. +* `epsilon`: Small value to avoid zero denominator. +* `use_locking`: If True use locks for update operations. +* `name`: Optional name prefix for the operations created when applying + gradients. Defaults to "RMSProp". + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.batch_join.md deleted file mode 100644 index f6985b0a44..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.batch_join.md +++ /dev/null @@ -1,79 +0,0 @@ -### `tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch_join} - -Runs a list of tensors to fill a queue to create batches of examples. - -The `tensors_list` argument is a list of tuples of tensors, or a list of -dictionaries of tensors. Each element in the list is treated similarily -to the `tensors` argument of `tf.train.batch()`. 
- -Enqueues a different list of tensors in different threads. -Implemented using a queue -- a `QueueRunner` for the queue -is added to the current `Graph`'s `QUEUE_RUNNER` collection. - -`len(tensors_list)` threads will be started, -with thread `i` enqueuing the tensors from -`tensors_list[i]`. `tensors_list[i1][j]` must match -`tensors_list[i2][j]` in type and shape, except in the first -dimension if `enqueue_many` is true. - -If `enqueue_many` is `False`, each `tensors_list[i]` is assumed -to represent a single example. An input tensor `x` will be output as a -tensor with shape `[batch_size] + x.shape`. - -If `enqueue_many` is `True`, `tensors_list[i]` is assumed to -represent a batch of examples, where the first dimension is indexed -by example, and all members of `tensors_list[i]` should have the -same size in the first dimension. The slices of any input tensor -`x` are treated as examples, and the output tensors will have shape -`[batch_size] + x.shape[1:]`. - -The `capacity` argument controls the how long the prefetching is allowed to -grow the queues. - -The returned operation is a dequeue operation and will throw -`tf.errors.OutOfRangeError` if the input queue is exhausted. If this -operation is feeding another input queue, its queue runner will catch -this exception, however, if this operation is used in your main thread -you are responsible for catching this yourself. - -*N.B.:* If `dynamic_pad` is `False`, you must ensure that either -(i) the `shapes` argument is passed, or (ii) all of the tensors in -`tensors_list` must have fully-defined shapes. `ValueError` will be -raised if neither of these conditions holds. - -If `dynamic_pad` is `True`, it is sufficient that the *rank* of the -tensors is known, but individual dimensions may have value `None`. 
-In this case, for each enqueue the dimensions with value `None` -may have a variable length; upon dequeue, the output tensors will be padded -on the right to the maximum shape of the tensors in the current minibatch. -For numbers, this padding takes value 0. For strings, this padding is -the empty string. See `PaddingFIFOQueue` for more info. - -##### Args: - - -* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. -* `batch_size`: An integer. The new batch size pulled from the queue. -* `capacity`: An integer. The maximum number of elements in the queue. -* `enqueue_many`: Whether each tensor in `tensor_list_list` is a single - example. -* `shapes`: (Optional) The shapes for each example. Defaults to the - inferred shapes for `tensor_list_list[i]`. -* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. - The given dimensions are padded upon dequeue so that tensors within a - batch have the same shapes. -* `shared_name`: (Optional) If set, this queue will be shared under the given - name across multiple sessions. -* `name`: (Optional) A name for the operations. - -##### Returns: - - A list or dictionary of tensors with the same number and types as - `tensors_list[i]`. - -##### Raises: - - -* `ValueError`: If the `shapes` are not specified, and cannot be - inferred from the elements of `tensor_list_list`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.generate_checkpoint_state_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.generate_checkpoint_state_proto.md new file mode 100644 index 0000000000..7405b289e3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.generate_checkpoint_state_proto.md @@ -0,0 +1,20 @@ +### `tf.train.generate_checkpoint_state_proto(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None)` {#generate_checkpoint_state_proto} + +Generates a checkpoint state proto. 
+ +##### Args: + + +* `save_dir`: Directory where the model was saved. +* `model_checkpoint_path`: The checkpoint file. +* `all_model_checkpoint_paths`: List of strings. Paths to all not-yet-deleted + checkpoints, sorted from oldest to newest. If this is a non-empty list, + the last element must be equal to model_checkpoint_path. These paths + are also saved in the CheckpointState proto. + +##### Returns: + + CheckpointState proto with model_checkpoint_path and + all_model_checkpoint_paths updated to either absolute paths or + relative paths to the current save_dir. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md new file mode 100644 index 0000000000..ba3e710df4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md @@ -0,0 +1,21 @@ +### `tf.train.limit_epochs(tensor, num_epochs=None, name=None)` {#limit_epochs} + +Returns tensor `num_epochs` times and then raises an `OutOfRange` error. + +##### Args: + + +* `tensor`: Any `Tensor`. +* `num_epochs`: A positive integer (optional). If specified, limits the number + of steps the output tensor may be evaluated. +* `name`: A name for the operations (optional). + +##### Returns: + + tensor or `OutOfRange`. + +##### Raises: + + +* `ValueError`: if `num_epochs` is invalid. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.truncated_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.truncated_normal.md new file mode 100644 index 0000000000..9ae13882d3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.truncated_normal.md @@ -0,0 +1,27 @@ +### `tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#truncated_normal} + +Outputs random values from a truncated normal distribution. 
+ +The generated values follow a normal distribution with specified mean and +standard deviation, except that values whose magnitude is more than 2 standard +deviations from the mean are dropped and re-picked. + +##### Args: + + +* `shape`: A 1-D integer Tensor or Python array. The shape of the output tensor. +* `mean`: A 0-D Tensor or Python value of type `dtype`. The mean of the + truncated normal distribution. +* `stddev`: A 0-D Tensor or Python value of type `dtype`. The standard deviation + of the truncated normal distribution. +* `dtype`: The type of the output. +* `seed`: A Python integer. Used to create a random seed for the distribution. + See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for the operation (optional). + +##### Returns: + + A tensor of the specified shape filled with random truncated normal values. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.unique_with_counts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.unique_with_counts.md deleted file mode 100644 index 2d3d32d970..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.unique_with_counts.md +++ /dev/null @@ -1,36 +0,0 @@ -### `tf.unique_with_counts(x, name=None)` {#unique_with_counts} - -Finds unique elements in a 1-D tensor. - -This operation returns a tensor `y` containing all of the unique elements of `x` -sorted in the same order that they occur in `x`. This operation also returns a -tensor `idx` the same size as `x` that contains the index of each value of `x` -in the unique output `y`. Finally, it returns a third tensor `count` that -contains the count of each element of `y` in `x`. 
In other words: - -`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` - -For example: - -```prettyprint -# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] -y, idx, count = unique_with_counts(x) -y ==> [1, 2, 4, 7, 8] -idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] -count ==> [2, 1, 3, 1, 2] -``` - -##### Args: - - -* `x`: A `Tensor`. 1-D. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of `Tensor` objects (y, idx, count). - -* `y`: A `Tensor`. Has the same type as `x`. 1-D. -* `idx`: A `Tensor` of type `int32`. 1-D. -* `count`: A `Tensor` of type `int32`. 1-D. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_axis_size_partitioner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_axis_size_partitioner.md new file mode 100644 index 0000000000..5d8822e83c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_axis_size_partitioner.md @@ -0,0 +1,37 @@ +### `tf.variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None)` {#variable_axis_size_partitioner} + +Get a partitioner for VariableScope to keep shards below `max_shard_bytes`. + +This partitioner will shard a Variable along one axis, attempting to keep +the maximum shard size below `max_shard_bytes`. In practice, this is not +always possible when sharding along only one axis. When this happens, +this axis is sharded as much as possible (i.e., every dimension becomes +a separate shard). + +If the partitioner hits the `max_shards` limit, then each shard may end up +larger than `max_shard_bytes`. By default `max_shards` equals `None` and no +limit on the number of shards is enforced. + +One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost +`64MB`, to keep below the protobuf byte limit. + +##### Args: + + +* `max_shard_bytes`: The maximum size any given shard is allowed to be. +* `axis`: The axis to partition along. 
Default: outermost axis. +* `bytes_per_string_element`: If the `Variable` is of type string, this provides + an estimate of how large each scalar in the `Variable` is. +* `max_shards`: An `int`; the maximum number of shards to create, taking + precedence over `max_shard_bytes`. + +##### Returns: + + A partition function usable as the `partitioner` argument to + `variable_scope`, `get_variable`, and `get_partitioned_variable_list`. + +##### Raises: + + +* `ValueError`: If any of the byte counts are non-positive. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_scope.md deleted file mode 100644 index 86d4684b72..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.variable_scope.md +++ /dev/null @@ -1,82 +0,0 @@ -### `tf.variable_scope(name_or_scope, reuse=None, initializer=None, regularizer=None, caching_device=None, partitioner=None)` {#variable_scope} - -Returns a context for variable scope. - -Variable scope allows to create new variables and to share already created -ones while providing checks to not create or share by accident. For details, -see the [Variable Scope How To](../../how_tos/variable_scope/index.md), -here we present only a few basic examples. 
- -Simple example of how to create a new variable: - -```python -with tf.variable_scope("foo"): - with tf.variable_scope("bar"): - v = tf.get_variable("v", [1]) - assert v.name == "foo/bar/v:0" -``` - -Basic example of sharing a variable: - -```python -with tf.variable_scope("foo"): - v = tf.get_variable("v", [1]) -with tf.variable_scope("foo", reuse=True): - v1 = tf.get_variable("v", [1]) -assert v1 == v -``` - -Sharing a variable by capturing a scope and setting reuse: - -```python -with tf.variable_scope("foo") as scope: - v = tf.get_variable("v", [1]) - scope.reuse_variables() - v1 = tf.get_variable("v", [1]) -assert v1 == v -``` - -To prevent accidental sharing of variables, we raise an exception when -getting an existing variable in a non-reusing scope. - -```python -with tf.variable_scope("foo"): - v = tf.get_variable("v", [1]) - v1 = tf.get_variable("v", [1]) - # Raises ValueError("... v already exists ..."). -``` - -Similarly, we raise an exception when trying to get a variable that -does not exist in reuse mode. - -```python -with tf.variable_scope("foo", reuse=True): - v = tf.get_variable("v", [1]) - # Raises ValueError("... v does not exists ..."). -``` - -Note that the `reuse` flag is inherited: if we open a reusing scope, -then all its sub-scopes become reusing as well. - -##### Args: - - -* `name_or_scope`: `string` or `VariableScope`: the scope to open. -* `reuse`: `True` or `None`; if `True`, we go into reuse mode for this scope as - well as all sub-scopes; if `None`, we just inherit the parent scope reuse. -* `initializer`: default initializer for variables within this scope. -* `regularizer`: default regularizer for variables within this scope. -* `caching_device`: default caching device for variables within this scope. -* `partitioner`: default partitioner for variables within this scope. - -##### Returns: - - A scope that can be captured and reused.
- -##### Raises: - - -* `ValueError`: when trying to reuse within a create scope, or create within - a reuse scope, or if reuse is not `None` or `True`. -* `TypeError`: when the types of some arguments are not appropriate. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_like.md deleted file mode 100644 index 9017e14287..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_like.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.zeros_like(tensor, dtype=None, name=None)` {#zeros_like} - -Creates a tensor with all elements set to zero. - -Given a single tensor (`tensor`), this operation returns a tensor of the -same type and shape as `tensor` with all elements set to zero. Optionally, -you can use `dtype` to specify a new type for the returned tensor. - -For example: - -```python -# 'tensor' is [[1, 2, 3], [4, 5, 6]] -tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]] -``` - -##### Args: - - -* `tensor`: A `Tensor`. -* `dtype`: A type for the returned `Tensor`. Must be `float32`, `float64`, - `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`. - -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with all elements set to zero. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Assert.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Assert.md deleted file mode 100644 index 6471b9aea4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Assert.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.Assert(condition, data, summarize=None, name=None)` {#Assert} - -Asserts that the given condition is true. - -If `condition` evaluates to false, print the list of tensors in `data`. -`summarize` determines how many entries of the tensors to print. 
- -NOTE: To ensure that Assert executes, one usually attaches a dependency: - -```python - # Ensure maximum element of x is smaller or equal to 1 -assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x]) -x = tf.with_dependencies([assert_op], x) -``` - -##### Args: - - -* `condition`: The condition to evaluate. -* `data`: The tensors to print out when condition is false. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). - -##### Returns: - - -* `assert_op`: An `Operation` that, when executed, raises a - `tf.errors.InvalidArgumentError` if `condition` is not true. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.from_list.md deleted file mode 100644 index d9a2e7c71f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.from_list.md +++ /dev/null @@ -1,21 +0,0 @@ -#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list} - -Create a queue using the queue reference from `queues[index]`. - -##### Args: - - -* `index`: An integer scalar tensor that determines the input that gets - selected. -* `queues`: A list of `QueueBase` objects. - -##### Returns: - - A `QueueBase` object. - -##### Raises: - - -* `TypeError`: When `queues` is not a list of `QueueBase` objects, - or when the data types of `queues` are not all the same. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.md new file mode 100644 index 0000000000..35373f6edc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.QueueBase.md @@ -0,0 +1,268 @@ +Base class for queue implementations. 
+ +A queue is a TensorFlow data structure that stores tensors across +multiple steps, and exposes operations that enqueue and dequeue +tensors. + +Each queue element is a tuple of one or more tensors, where each +tuple component has a static dtype, and may have a static shape. The +queue implementations support versions of enqueue and dequeue that +handle single elements, and versions that support enqueuing and +dequeuing a batch of elements at once. + +See [`tf.FIFOQueue`](#FIFOQueue) and +[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete +implementations of this class, and instructions on how to create +them. + +- - - + +#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue} + +Enqueues one element to this queue. + +If the queue is full when this operation executes, it will block +until the element has been enqueued. + +##### Args: + + +* `vals`: A tensor, a list or tuple of tensors, or a dictionary containing + the values to enqueue. +* `name`: A name for the operation (optional). + +##### Returns: + + The operation that enqueues a new tuple of tensors to the queue. + + +- - - + +#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many} + +Enqueues zero or more elements to this queue. + +This operation slices each component tensor along the 0th dimension to +make multiple queue elements. All of the tensors in `vals` must have the +same size in the 0th dimension. + +If the queue is full when this operation executes, it will block +until all of the elements have been enqueued. + +##### Args: + + +* `vals`: A tensor, a list or tuple of tensors, or a dictionary + from which the queue elements are taken. +* `name`: A name for the operation (optional). + +##### Returns: + + The operation that enqueues a batch of tuples of tensors to the queue. + + + +- - - + +#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue} + +Dequeues one element from this queue.
+ +If the queue is empty when this operation executes, it will block +until there is an element to dequeue. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The tuple of tensors that was dequeued. + + +- - - + +#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many} + +Dequeues and concatenates `n` elements from this queue. + +This operation concatenates queue-element component tensors along +the 0th dimension to make a single component tensor. All of the +components in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are less than `n` elements left, then an +`OutOfRange` exception is raised. + +##### Args: + + +* `n`: A scalar `Tensor` containing the number of elements to dequeue. +* `name`: A name for the operation (optional). + +##### Returns: + + The tuple of concatenated tensors that was dequeued. + + + +- - - + +#### `tf.QueueBase.size(name=None)` {#QueueBase.size} + +Compute the number of elements in this queue. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A scalar tensor containing the number of elements in this queue. + + + +- - - + +#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close} + +Closes this queue. + +This operation signals that no more elements will be enqueued in +the given queue. Subsequent `enqueue` and `enqueue_many` +operations will fail. Subsequent `dequeue` and `dequeue_many` +operations will continue to succeed if sufficient elements remain +in the queue. Subsequent `dequeue` and `dequeue_many` operations +that would block will fail immediately. + +If `cancel_pending_enqueues` is `True`, all pending requests will also +be cancelled. + +##### Args: + + +* `cancel_pending_enqueues`: (Optional.) A boolean, defaulting to + `False` (described above). +* `name`: A name for the operation (optional). + +##### Returns: + + The operation that closes the queue. 
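The enqueue/dequeue semantics described above can be sketched in pure Python. This is a hypothetical toy model built on a `deque` (not the TensorFlow ops, which operate on tensors inside a graph and block inside a session); it only illustrates how `enqueue_many` slices along the 0th dimension and how `dequeue_many` concatenates elements back:

```python
from collections import deque

# Toy sketch of QueueBase semantics -- hypothetical, not the TF implementation.
class ToyQueue:
    def __init__(self):
        self._buf = deque()

    def enqueue(self, val):
        # One element in, unchanged.
        self._buf.append(val)

    def enqueue_many(self, vals):
        # Slice along the 0th dimension: each row becomes one queue element.
        for v in vals:
            self._buf.append(v)

    def dequeue(self):
        return self._buf.popleft()

    def dequeue_many(self, n):
        # Concatenate n elements along a new 0th dimension.
        return [self._buf.popleft() for _ in range(n)]

q = ToyQueue()
q.enqueue_many([[1, 2], [3, 4], [5, 6]])  # three elements of shape [2]
first = q.dequeue()        # [1, 2]
rest = q.dequeue_many(2)   # [[3, 4], [5, 6]]
```

In the real ops, a `dequeue_many` on an insufficiently full queue blocks rather than raising, and all values are tensors with checked dtypes and shapes.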
+ + +#### Other Methods +- - - + +#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__} + +Constructs a queue object from a queue reference. + +The two optional lists, `shapes` and `names`, must be of the same length +as `dtypes` if provided. The values at a given index `i` indicate the +shape and name to use for the corresponding queue component in `dtypes`. + +##### Args: + + +* `dtypes`: A list of types. The length of dtypes must equal the number + of tensors in each element. +* `shapes`: Constraints on the shapes of tensors in an element: + A list of shape tuples or None. This list is the same length + as dtypes. If the shapes of any tensors in the element are constrained, + all must be; shapes can be None if the shapes should not be constrained. +* `names`: Optional list of names. If provided, the `enqueue()` and + `dequeue()` methods will use dictionaries with these names as keys. + Must be None or a list or tuple of the same length as `dtypes`. +* `queue_ref`: The queue reference, i.e. the output of the queue op. + +##### Raises: + + +* `ValueError`: If one of the arguments is invalid. + + +- - - + +#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to} + +Dequeues and concatenates `n` elements from this queue. + +**Note**: This operation is not supported by all queues. If a queue does not +support DequeueUpTo, then an Unimplemented exception is raised. + +This operation concatenates queue-element component tensors along the +0th dimension to make a single component tensor. All of the components +in the dequeued tuple will have size `n` in the 0th dimension. + +If the queue is closed and there are more than `0` but fewer than `n` +elements remaining, then instead of raising an `OutOfRange` exception like +`dequeue_many`, the remaining elements are returned immediately. +If the queue is closed and there are `0` elements left in the queue, then +an `OutOfRange` exception is raised just like in `dequeue_many`.
+Otherwise the behavior is identical to `dequeue_many`: + +##### Args: + + +* `n`: A scalar `Tensor` containing the number of elements to dequeue. +* `name`: A name for the operation (optional). + +##### Returns: + + The tuple of concatenated tensors that was dequeued. + + +- - - + +#### `tf.QueueBase.dtypes` {#QueueBase.dtypes} + +The list of dtypes for each component of a queue element. + + +- - - + +#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list} + +Create a queue using the queue reference from `queues[index]`. + +##### Args: + + +* `index`: An integer scalar tensor that determines the input that gets + selected. +* `queues`: A list of `QueueBase` objects. + +##### Returns: + + A `QueueBase` object. + +##### Raises: + + +* `TypeError`: When `queues` is not a list of `QueueBase` objects, + or when the data types of `queues` are not all the same. + + +- - - + +#### `tf.QueueBase.name` {#QueueBase.name} + +The name of the underlying queue. + + +- - - + +#### `tf.QueueBase.names` {#QueueBase.names} + +The list of names for each component of a queue element. + + +- - - + +#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref} + +The underlying queue reference. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.RandomShuffleQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.RandomShuffleQueue.md new file mode 100644 index 0000000000..cd617e7578 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.RandomShuffleQueue.md @@ -0,0 +1,54 @@ +A queue implementation that dequeues elements in a random order. + +See [`tf.QueueBase`](#QueueBase) for a description of the methods on +this class. + +- - - + +#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__} + +Create a queue that dequeues elements in a random order. 
+ +A `RandomShuffleQueue` has bounded capacity; supports multiple +concurrent producers and consumers; and provides exactly-once +delivery. + +A `RandomShuffleQueue` holds a list of up to `capacity` +elements. Each element is a fixed-length tuple of tensors whose +dtypes are described by `dtypes`, and whose shapes are optionally +described by the `shapes` argument. + +If the `shapes` argument is specified, each component of a queue +element must have the respective fixed shape. If it is +unspecified, different queue elements may have different shapes, +but the use of `dequeue_many` is disallowed. + +The `min_after_dequeue` argument allows the caller to specify a +minimum number of elements that will remain in the queue after a +`dequeue` or `dequeue_many` operation completes, to ensure a +minimum level of mixing of elements. This invariant is maintained +by blocking those operations until sufficient elements have been +enqueued. The `min_after_dequeue` argument is ignored after the +queue has been closed. + +##### Args: + + +* `capacity`: An integer. The upper bound on the number of elements + that may be stored in this queue. +* `min_after_dequeue`: An integer (described above). +* `dtypes`: A list of `DType` objects. The length of `dtypes` must equal + the number of tensors in each queue element. +* `shapes`: (Optional.) A list of fully-defined `TensorShape` objects + with the same length as `dtypes`, or `None`. +* `names`: (Optional.) A list of strings naming the components in the queue + with the same length as `dtypes`, or `None`. If specified, the dequeue + methods return a dictionary with the names as keys. +* `seed`: A Python integer. Used to create a random seed. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `shared_name`: (Optional.) If non-empty, this queue will be shared under + the given name across multiple sessions. +* `name`: Optional name for the queue operation.
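The `min_after_dequeue` invariant can be sketched in pure Python. This hypothetical toy (not the TensorFlow implementation, which blocks instead of raising) shows why a dequeue is only allowed while more than `min_after_dequeue` elements are buffered, which is what forces a minimum level of mixing:

```python
import random

# Hypothetical sketch of the min_after_dequeue rule -- not the TF op.
def shuffle_dequeue(buf, min_after_dequeue, rng):
    # The real op would block here; the sketch raises instead.
    if len(buf) <= min_after_dequeue:
        raise RuntimeError("would block: dequeue must leave at least "
                           "min_after_dequeue elements in the queue")
    # Remove a uniformly random element, like RandomShuffleQueue.
    return buf.pop(rng.randrange(len(buf)))

rng = random.Random(0)
buf = list(range(10))
x = shuffle_dequeue(buf, min_after_dequeue=5, rng=rng)  # ok: 10 > 5
```

With only `min_after_dequeue` (or fewer) elements buffered, the sketch raises where the real queue would block until producers enqueue more.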
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.TextLineReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.TextLineReader.md deleted file mode 100644 index ebb023a2fa..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.TextLineReader.md +++ /dev/null @@ -1,148 +0,0 @@ -A Reader that outputs the lines of a file delimited by newlines. - -Newlines are stripped from the output. -See ReaderBase for supported methods. -- - - - -#### `tf.TextLineReader.__init__(skip_header_lines=None, name=None)` {#TextLineReader.__init__} - -Create a TextLineReader. - -##### Args: - - -* `skip_header_lines`: An optional int. Defaults to 0. Number of lines - to skip from the beginning of every file. -* `name`: A name for the operation (optional). - - -- - - - -#### `tf.TextLineReader.num_records_produced(name=None)` {#TextLineReader.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.TextLineReader.num_work_units_completed(name=None)` {#TextLineReader.num_work_units_completed} - -Returns the number of work units this reader has finished processing. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.TextLineReader.read(queue, name=None)` {#TextLineReader.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.TextLineReader.reader_ref` {#TextLineReader.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.TextLineReader.reset(name=None)` {#TextLineReader.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.TextLineReader.restore_state(state, name=None)` {#TextLineReader.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. - -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.TextLineReader.serialize_state(name=None)` {#TextLineReader.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. - - -- - - - -#### `tf.TextLineReader.supports_serialize` {#TextLineReader.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.VariableScope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.VariableScope.md new file mode 100644 index 0000000000..04fdca9bdf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.VariableScope.md @@ -0,0 +1,105 @@ +Variable scope object to carry defaults to provide to get_variable. + +Many of the arguments we need for get_variable in a variable store are most +easily handled with a context. 
This object is used for the defaults. + +Attributes: + name: name of the current scope, used as prefix in get_variable. + initializer: default initializer passed to get_variable. + regularizer: default regularizer passed to get_variable. + reuse: Boolean or None, setting the reuse in get_variable. + caching_device: string, callable, or None: the caching device passed to + get_variable. + partitioner: callable or `None`: the partitioner passed to `get_variable`. + name_scope: The name passed to `tf.name_scope`. +- - - + +#### `tf.VariableScope.__init__(reuse, name='', initializer=None, regularizer=None, caching_device=None, partitioner=None, name_scope='')` {#VariableScope.__init__} + +Creates a new VariableScope with the given properties. + + +- - - + +#### `tf.VariableScope.caching_device` {#VariableScope.caching_device} + + + + +- - - + +#### `tf.VariableScope.get_variable(var_store, name, shape=None, dtype=tf.float32, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True)` {#VariableScope.get_variable} + +Gets an existing variable with this name or create a new one. + + +- - - + +#### `tf.VariableScope.initializer` {#VariableScope.initializer} + + + + +- - - + +#### `tf.VariableScope.name` {#VariableScope.name} + + + + +- - - + +#### `tf.VariableScope.partitioner` {#VariableScope.partitioner} + + + + +- - - + +#### `tf.VariableScope.regularizer` {#VariableScope.regularizer} + + + + +- - - + +#### `tf.VariableScope.reuse` {#VariableScope.reuse} + + + + +- - - + +#### `tf.VariableScope.reuse_variables()` {#VariableScope.reuse_variables} + +Reuse variables in this scope. + + +- - - + +#### `tf.VariableScope.set_caching_device(caching_device)` {#VariableScope.set_caching_device} + +Set caching_device for this scope. + + +- - - + +#### `tf.VariableScope.set_initializer(initializer)` {#VariableScope.set_initializer} + +Set initializer for this scope. 
+ + +- - - + +#### `tf.VariableScope.set_partitioner(partitioner)` {#VariableScope.set_partitioner} + +Set partitioner for this scope. + + +- - - + +#### `tf.VariableScope.set_regularizer(regularizer)` {#VariableScope.set_regularizer} + +Set regularizer for this scope. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_negative.md new file mode 100644 index 0000000000..81daebec0d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_negative.md @@ -0,0 +1,33 @@ +### `tf.assert_negative(x, data=None, summarize=None, name=None)` {#assert_negative} + +Assert the condition `x < 0` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_negative(x)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_negative(x)], x) +``` + +Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`. +If `x` is empty this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_negative". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` is all negative. 
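The element-wise check performed by `tf.assert_negative`, including the vacuous success on an empty tensor, can be sketched with NumPy (a hypothetical eager-mode stand-in, not the graph op, which defers the check to run time):

```python
import numpy as np

# Hypothetical NumPy sketch of the tf.assert_negative check -- not the TF op.
def assert_negative_np(x):
    x = np.asarray(x)
    # np.all over an empty array is True, so an empty x passes trivially,
    # matching "If x is empty this is trivially satisfied" above.
    if not np.all(x < 0):
        raise ValueError("x is not all negative")
    return x

assert_negative_np([-1.0, -0.5])  # passes
assert_negative_np([])            # passes trivially
```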
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank.md deleted file mode 100644 index e8da009641..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank.md +++ /dev/null @@ -1,36 +0,0 @@ -### `tf.assert_rank(x, rank, data=None, summarize=None, name=None)` {#assert_rank} - -Assert `x` has rank equal to `rank`. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_rank(x, 2)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_rank(x, 2)], x) -``` - -##### Args: - - -* `x`: Numeric `Tensor`. -* `rank`: Scalar integer `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_rank". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` has specified rank. - -##### Raises: - - -* `ValueError`: If static checks determine `x` has wrong rank. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_determinant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_determinant.md deleted file mode 100644 index 83f9503a4d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_determinant.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.batch_matrix_determinant(input, name=None)` {#batch_matrix_determinant} - -Calculates the determinants for a batch of square matrices. - -The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions -form square matrices. The output is a 1-D tensor containing the determinants -for all input submatrices `[..., :, :]`. 
- -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. - Shape is `[..., M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[...]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_diag.md new file mode 100644 index 0000000000..6e5458ba6c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_diag.md @@ -0,0 +1,42 @@ +### `tf.batch_matrix_diag(diagonal, name=None)` {#batch_matrix_diag} + +Returns a batched diagonal tensor with given batched diagonal values. + +Given a `diagonal`, this operation returns a tensor with the `diagonal` and +everything else padded with zeros. The diagonal is computed as follows: + +Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a +tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where: + +`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`. + +For example: + +```prettyprint +# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] + +and diagonal.shape = (2, 4) + +tf.batch_matrix_diag(diagonal) ==> [[[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]], + [[5, 0, 0, 0] + [0, 6, 0, 0] + [0, 0, 7, 0] + [0, 0, 0, 8]]] + +which has shape (2, 4, 4) +``` + +##### Args: + + +* `diagonal`: A `Tensor`. Rank `k`, where `k >= 1`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `diagonal`. + Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
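The rule `output[..., m, n] = 1{m=n} * diagonal[..., n]` above can be reproduced with a short NumPy sketch (a hypothetical stand-in for the op, useful for checking shapes):

```python
import numpy as np

# Hypothetical NumPy sketch of the batch_matrix_diag semantics above.
def batch_matrix_diag_np(diagonal):
    d = np.asarray(diagonal)
    n = d.shape[-1]
    # Output gains one trailing dimension: shape + [shape[-1]].
    out = np.zeros(d.shape + (n,), dtype=d.dtype)
    idx = np.arange(n)
    out[..., idx, idx] = d  # write every batched diagonal in one shot
    return out

d = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
m = batch_matrix_diag_np(d)  # shape (2, 4, 4), matching the example above
```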
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve.md deleted file mode 100644 index f75ea79bc5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.batch_matrix_solve(matrix, rhs, adjoint=None, name=None)` {#batch_matrix_solve} - -Solves systems of linear equations. Checks for invertibility. - -Matrix is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions -form square matrices. Rhs is a tensor of shape -`[..., M, K]`. The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output -matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. -If `adjoint` is `True` then each output -matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`. - -##### Args: - - -* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[..., M, M]`. -* `rhs`: A `Tensor`. Must have the same type as `matrix`. - Shape is `[..., M, K]`. -* `adjoint`: An optional `bool`. Defaults to `False`. - Boolean indicating whether to solve with `matrix` or its (block-wise) - adjoint. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve_ls.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve_ls.md new file mode 100644 index 0000000000..2b33669fa2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_matrix_solve_ls.md @@ -0,0 +1,56 @@ +### `tf.batch_matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#batch_matrix_solve_ls} + +Solves multiple linear least-squares problems.
+ +`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions +form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose +inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a +`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K` +matrices that solve the equations +`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares +sense. + +Below we will use the following notation for each pair of +matrix and right-hand sides in the batch: + +`matrix`=\\(A \in \Re^{m \times n}\\), +`rhs`=\\(B \in \Re^{m \times k}\\), +`output`=\\(X \in \Re^{n \times k}\\), +`l2_regularizer`=\\(\lambda\\). + +If `fast` is `True`, then the solution is computed by solving the normal +equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then +\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares +problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + +\lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as +\\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is +the minimum-norm solution to the under-determined linear system, i.e. +\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), subject to +\\(A Z = B\\). Notice that the fast path is only numerically stable when +\\(A\\) is numerically full rank and has a condition number +\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) or \\(\lambda\\) +is sufficiently large. + +If `fast` is `False` an algorithm based on the numerically robust complete +orthogonal decomposition is used. This computes the minimum-norm +least-squares solution, even when \\(A\\) is rank deficient. This path is +typically 6-7 times slower than the fast path. If `fast` is `False` then +`l2_regularizer` is ignored. + +##### Args: + + +* `matrix`: `Tensor` of shape `[..., M, N]`. +* `rhs`: `Tensor` of shape `[..., M, K]`. +* `l2_regularizer`: 0-D `double` `Tensor`.
Ignored if `fast=False`. +* `fast`: bool. Defaults to `True`. +* `name`: string, optional name of the operation. + +##### Returns: + + +* `output`: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form + `M`-by-`K` matrices that solve the equations + `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least + squares sense. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_self_adjoint_eig.md new file mode 100644 index 0000000000..19d6c5319f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_self_adjoint_eig.md @@ -0,0 +1,22 @@ +### `tf.batch_self_adjoint_eig(input, name=None)` {#batch_self_adjoint_eig} + +Calculates the Eigen Decomposition of a batch of square self-adjoint matrices. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices, with the same constraints as the single matrix +SelfAdjointEig. + +The result is a `[..., M+1, M]` matrix with `[..., 0, :]` containing the +eigenvalues, and subsequent `[..., 1:, :]` containing the eigenvectors. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. + Shape is `[..., M, M]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. Shape is `[..., M+1, M]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cholesky_solve.md deleted file mode 100644 index 7445d3f929..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cholesky_solve.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.cholesky_solve(chol, rhs, name=None)` {#cholesky_solve} - -Solve linear equations `A X = RHS`, given Cholesky factorization of `A`.
- -```python -# Solve one system of linear equations (K = 1). -A = [[3, 1], [1, 3]] -RHS = [[2], [22]] # shape 2 x 1 -chol = tf.cholesky(A) -X = tf.cholesky_solve(chol, RHS) -# tf.matmul(A, X) ~ RHS -X[:, 0] # Solution to the linear system A x = RHS[:, 0] - -# Solve five systems of linear equations (K = 5). -A = [[3, 1], [1, 3]] -RHS = [[1, 2, 3, 4, 5], [11, 22, 33, 44, 55]] # shape 2 x 5 -... -X[:, 2] # Solution to the linear system A x = RHS[:, 2] -``` - -##### Args: - - -* `chol`: A `Tensor`. Must be `float32` or `float64`, shape is `[M, M]`. - Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`. For that - reason, only the lower triangular part (including the diagonal) of `chol` - is used. The strictly upper part is assumed to be zero and not accessed. -* `rhs`: A `Tensor`, same type as `chol`, shape is `[M, K]`, designating `K` - systems of linear equations. -* `name`: A name to give this `Op`. Defaults to `cholesky_solve`. - -##### Returns: - - Solution to `A X = RHS`, shape `[M, K]`. The solutions to the `K` systems. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.complex_abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.complex_abs.md deleted file mode 100644 index 1cb76668d6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.complex_abs.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.complex_abs(x, name=None)` {#complex_abs} - -Computes the complex absolute value of a tensor. - -Given a tensor `x` of complex numbers, this operation returns a tensor of type -`float` or `double` that is the absolute value of each element in `x`. All -elements in `x` must be complex numbers of the form \\(a + bj\\). The -absolute value is computed as \\( \sqrt{a^2 + b^2}\\). - -For example: - -``` -# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]] -tf.complex_abs(x) ==> [5.25594902, 6.60492229] -``` - -##### Args: - - -* `x`: A `Tensor` of type `complex64` or `complex128`. 
-* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32` or `float64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant.md deleted file mode 100644 index ff34b6eeb1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.constant(value, dtype=None, shape=None, name='Const')` {#constant} - -Creates a constant tensor. - - The resulting tensor is populated with values of type `dtype`, as - specified by arguments `value` and (optionally) `shape` (see examples - below). - - The argument `value` can be a constant value, or a list of values of type - `dtype`. If `value` is a list, then the length of the list must be less - than or equal to the number of elements implied by the `shape` argument (if - specified). In the case where the list length is less than the number of - elements specified by `shape`, the last element in the list will be used - to fill the remaining entries. - - The argument `shape` is optional. If present, it specifies the dimensions of - the resulting tensor. If not present, the shape of `value` is used. - - If the argument `dtype` is not specified, then the type is inferred from - the type of `value`. - - For example: - - ```python - # Constant 1-D Tensor populated with value list. - tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7] - - # Constant 2-D tensor populated with scalar value -1. - tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.] - [-1. -1. -1.]] - ``` - -##### Args: - - -* `value`: A constant value (or list) of output type `dtype`. - - -* `dtype`: The type of the elements of the resulting tensor. - - -* `shape`: Optional dimensions of resulting tensor. - - -* `name`: Optional name for the tensor. - -##### Returns: - - A Constant Tensor. 
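The padding rule described in the `tf.constant` docstring above (a value list shorter than the number of elements implied by `shape` is extended with its last element) can be sketched in a few lines of plain Python. The helper name `constant_like` is made up for illustration; this is not the TensorFlow kernel:

```python
def constant_like(value, shape=None):
    """Sketch of the documented tf.constant fill rule for a flat value list.

    If `value` has fewer elements than `shape` implies, the last element is
    repeated to fill the remainder, as the docstring above specifies.
    """
    if shape is None:
        return list(value)
    n = 1
    for dim in shape:
        n *= dim
    if len(value) > n:
        raise ValueError("value is longer than shape allows")
    # Pad with the last element; reshaping to `shape` is omitted for brevity.
    return list(value) + [value[-1]] * (n - len(value))

print(constant_like([1, 2], shape=[2, 3]))  # [1, 2, 2, 2, 2, 2]
```

A value of exactly `n` elements is returned unchanged, matching the "less than or equal" wording above.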
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md new file mode 100644 index 0000000000..4ac524d708 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md @@ -0,0 +1,20 @@ +### `tf.constant_initializer(value=0.0, dtype=tf.float32)` {#constant_initializer} + +Returns an initializer that generates tensors with a single value. + +##### Args: + + +* `value`: A Python scalar. All elements of the initialized variable + will be set to this value. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer that generates tensors with a single value. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.Exponential.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.Exponential.md deleted file mode 100644 index 62181034b9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.Exponential.md +++ /dev/null @@ -1,260 +0,0 @@ -The Exponential distribution with rate parameter lam. - -The PDF of this distribution is: - -```pdf(x) = (lam * e^(-lam * x)), x > 0``` - -Note that the Exponential distribution is a special case of the Gamma -distribution, with Exponential(lam) = Gamma(1, lam). -- - - - -#### `tf.contrib.distributions.Exponential.__init__(lam, name='Exponential')` {#Exponential.__init__} - - - - -- - - - -#### `tf.contrib.distributions.Exponential.alpha` {#Exponential.alpha} - -Shape parameter. - - -- - - - -#### `tf.contrib.distributions.Exponential.batch_shape(name='batch_shape')` {#Exponential.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. 
- -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.Exponential.beta` {#Exponential.beta} - -Inverse scale parameter. - - -- - - - -#### `tf.contrib.distributions.Exponential.cdf(x, name='cdf')` {#Exponential.cdf} - -CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Exponential.dtype` {#Exponential.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.Exponential.entropy(name='entropy')` {#Exponential.entropy} - -The entropy of Gamma distribution(s). - -This is defined to be - -``` -entropy = alpha - log(beta) + log(Gamma(alpha)) - + (1-alpha)digamma(alpha) -``` - -where digamma(alpha) is the digamma function. - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.Exponential.event_shape(name='event_shape')` {#Exponential.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.Exponential.get_batch_shape()` {#Exponential.get_batch_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `batch_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. 
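In the Exponential special case of the Gamma entropy formula above (`alpha = 1`, `beta = lam`), `log(Gamma(1)) = 0` and the digamma term is multiplied by `1 - alpha = 0`, so the entropy reduces to `1 - log(lam)`. A stdlib-only numerical check of that reduction (an illustrative sketch, not TF code):

```python
import math

def exponential_entropy_closed_form(lam):
    # Gamma entropy alpha - log(beta) + log(Gamma(alpha)) + (1-alpha)*digamma(alpha)
    # evaluated at alpha = 1, beta = lam reduces to 1 - log(lam).
    return 1.0 - math.log(lam)

def exponential_entropy_numeric(lam, upper=60.0, steps=200000):
    # Riemann sum of -pdf(x) * log(pdf(x)) over (0, upper),
    # with pdf(x) = lam * exp(-lam * x).
    dx = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * dx
        p = lam * math.exp(-lam * x)
        total += -p * math.log(p) * dx
    return total

lam = 0.5
print(abs(exponential_entropy_closed_form(lam)
          - exponential_entropy_numeric(lam)) < 1e-3)  # True
```

The truncation at `upper=60` is safe here because the neglected tail mass is of order `exp(-lam * upper)`.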
- - -- - - - -#### `tf.contrib.distributions.Exponential.get_event_shape()` {#Exponential.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. - - -- - - - -#### `tf.contrib.distributions.Exponential.is_reparameterized` {#Exponential.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.Exponential.lam` {#Exponential.lam} - - - - -- - - - -#### `tf.contrib.distributions.Exponential.log_cdf(x, name='log_cdf')` {#Exponential.log_cdf} - -Log CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Exponential.log_pdf(x, name='log_pdf')` {#Exponential.log_pdf} - -Log pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Exponential.mean` {#Exponential.mean} - -Mean of each batch member. - - -- - - - -#### `tf.contrib.distributions.Exponential.name` {#Exponential.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.Exponential.pdf(x, name='pdf')` {#Exponential.pdf} - -Pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. 
- -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the PDFs of `x` - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Exponential.sample(n, seed=None, name=None)` {#Exponential.sample} - -Sample `n` observations from the Exponential Distributions. - -##### Args: - - -* `n`: `Scalar`, type int32, the number of observations to sample. -* `seed`: Python integer, the random seed. -* `name`: The name to give this op. - -##### Returns: - - -* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each - of the distributions determined by the hyperparameters. - - -- - - - -#### `tf.contrib.distributions.Exponential.variance` {#Exponential.variance} - -Variance of each batch member. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormal.md deleted file mode 100644 index 258cb03ea8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormal.md +++ /dev/null @@ -1,218 +0,0 @@ -The Multivariate Normal distribution on `R^k`. - -The distribution has mean and covariance parameters mu (1-D), sigma (2-D), -or alternatively mean `mu` and factored covariance (cholesky decomposed -`sigma`) called `sigma_chol`. - -#### Mathematical details - -The PDF of this distribution is: - -``` -f(x) = (2*pi)^(-k/2) |det(sigma)|^(-1/2) exp(-1/2*(x-mu)^*.sigma^{-1}.(x-mu)) -``` - -where `.` denotes the inner product on `R^k` and `^*` denotes transpose. - -Alternatively, if `sigma` is positive definite, it can be represented in terms -of its lower triangular cholesky factorization - -```sigma = sigma_chol . sigma_chol^*``` - -and the pdf above allows simpler computation: - -``` -|det(sigma)| = reduce_prod(diag(sigma_chol))^2 -x_whitened = sigma^{-1/2} . 
(x - mu) = tri_solve(sigma_chol, x - mu)
-(x-mu)^* .sigma^{-1} . (x-mu) = x_whitened^* . x_whitened
-```
-
-where `tri_solve()` solves a triangular system of equations.
-
-#### Examples
-
-A single multi-variate Gaussian distribution is defined by a vector of means
-of length `k`, and a covariance matrix of shape `k x k`.
-
-Extra leading dimensions, if provided, allow for batches.
-
-```python
-# Initialize a single 3-variate Gaussian with diagonal covariance.
-mu = [1, 2, 3]
-sigma = [[1, 0, 0], [0, 3, 0], [0, 0, 2]]
-dist = tf.contrib.distributions.MultivariateNormal(mu=mu, sigma=sigma)
-
-# Evaluate this on an observation in R^3, returning a scalar.
-dist.pdf([-1, 0, 1])
-
-# Initialize a batch of two 3-variate Gaussians.
-mu = [[1, 2, 3], [11, 22, 33]]
-sigma = ... # shape 2 x 3 x 3
-dist = tf.contrib.distributions.MultivariateNormal(mu=mu, sigma=sigma)
-
-# Evaluate this on two observations, each in R^3, returning a length two
-# tensor.
-x = [[-1, 0, 1], [-11, 0, 11]] # Shape 2 x 3.
-dist.pdf(x)
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormal.__init__(mu, sigma=None, sigma_chol=None, name=None)` {#MultivariateNormal.__init__}
-
-Multivariate Normal distributions on `R^k`.
-
-User must provide means `mu`, which are tensors of rank `N+1` (`N >= 0`)
-with the last dimension having length `k`.
-
-User must provide exactly one of `sigma` (the covariance matrices) or
-`sigma_chol` (the cholesky decompositions of the covariance matrices).
-`sigma` or `sigma_chol` must be of rank `N+2`. The last two dimensions
-must both have length `k`. The first `N` dimensions correspond to batch
-indices.
-
-If `sigma_chol` is not provided, the batch cholesky factorization of `sigma`
-is calculated for you.
-
-The shapes of `mu` and `sigma` must match for the first `N` dimensions.
-
-Regardless of which parameter is provided, the covariance matrices must all
-be **positive definite** (an error is raised if one of them is not).
- -##### Args: - - -* `mu`: (N+1)-D. `float` or `double` tensor, the means of the distributions. -* `sigma`: (N+2)-D. (optional) `float` or `double` tensor, the covariances - of the distribution(s). The first `N+1` dimensions must match - those of `mu`. Must be batch-positive-definite. -* `sigma_chol`: (N+2)-D. (optional) `float` or `double` tensor, a - lower-triangular factorization of `sigma` - (`sigma = sigma_chol . sigma_chol^*`). The first `N+1` dimensions - must match those of `mu`. The tensor itself need not be batch - lower triangular: we ignore the upper triangular part. However, - the batch diagonals must be positive (i.e., sigma_chol must be - batch-positive-definite). -* `name`: The name to give Ops created by the initializer. - -##### Raises: - - -* `ValueError`: if neither sigma nor sigma_chol is provided. -* `TypeError`: if mu and sigma (resp. sigma_chol) are different dtypes. - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.dtype` {#MultivariateNormal.dtype} - - - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.entropy(name=None)` {#MultivariateNormal.entropy} - -The entropies of these Multivariate Normals. - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropies. - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.is_reparameterized` {#MultivariateNormal.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.log_pdf(x, name=None)` {#MultivariateNormal.log_pdf} - -Log pdf of observations `x` given these Multivariate Normals. - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. 
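The Cholesky route in the mathematical details above (whiten with a triangular solve, take `log|det(sigma)|` from the factor's diagonal) can be checked by hand in two dimensions. A stdlib-only sketch, not the TF implementation; for a diagonal covariance the joint log-pdf must equal the sum of the two marginal normal log-pdfs:

```python
import math

def mvn_logpdf_2d(x, mu, sigma):
    """log N(x; mu, sigma) for a 2x2 covariance, via the route above."""
    a, b = sigma[0]
    _, c = sigma[1]
    # Lower-triangular Cholesky factor of [[a, b], [b, c]].
    l00 = math.sqrt(a)
    l10 = b / l00
    l11 = math.sqrt(c - l10 * l10)
    # Solve L z = (x - mu) by forward substitution (tri_solve in the text).
    d0, d1 = x[0] - mu[0], x[1] - mu[1]
    z0 = d0 / l00
    z1 = (d1 - l10 * z0) / l11
    # log|det(sigma)| = 2 * sum(log(diag(chol))), as stated above.
    log_det = 2.0 * (math.log(l00) + math.log(l11))
    return -0.5 * (2.0 * math.log(2.0 * math.pi) + log_det + z0 * z0 + z1 * z1)

def norm_logpdf(x, mu, var):
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mu) ** 2 / var)

# Diagonal covariance: the joint log-pdf equals the sum of the marginals.
mu, sigma, x = [1.0, 2.0], [[2.0, 0.0], [0.0, 3.0]], [0.5, 2.5]
print(abs(mvn_logpdf_2d(x, mu, sigma)
          - (norm_logpdf(0.5, 1.0, 2.0) + norm_logpdf(2.5, 2.0, 3.0))) < 1e-9)  # True
```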
- - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.mean` {#MultivariateNormal.mean} - - - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.mu` {#MultivariateNormal.mu} - - - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.pdf(x, name=None)` {#MultivariateNormal.pdf} - -The PDF of observations `x` under these Multivariate Normals. - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.sample(n, seed=None, name=None)` {#MultivariateNormal.sample} - -Sample `n` observations from the Multivariate Normal Distributions. - -##### Args: - - -* `n`: `Scalar`, type int32, the number of observations to sample. -* `seed`: Python integer, the random seed. -* `name`: The name to give this op. - -##### Returns: - - -* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each - of the distributions determined by broadcasting the hyperparameters. - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.sigma` {#MultivariateNormal.sigma} - - - - -- - - - -#### `tf.contrib.distributions.MultivariateNormal.sigma_det` {#MultivariateNormal.sigma_det} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_collection.md deleted file mode 100644 index b1b5f56056..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_collection.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.layers.summarize_collection(collection, name_filter=None, summarizer=summarize_tensor)` {#summarize_collection} - -Summarize a graph collection of tensors, possibly filtered by name. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md
new file mode 100644
index 0000000000..608999b437
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md
@@ -0,0 +1,4 @@
+### `tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)` {#summarize_tensors}
+
+Summarize a set of tensors.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.variance_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.variance_scaling_initializer.md
deleted file mode 100644
index c82f924432..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.variance_scaling_initializer.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)` {#variance_scaling_initializer}
-
-Returns an initializer that generates tensors without scaling variance.
-
-When initializing a deep network, it is in principle advantageous to keep
-the scale of the input variance constant, so it does not explode or diminish
-by reaching the final layer. This initializer uses the following formula:
- if mode='FAN_IN': # Count only number of input connections.
- n = fan_in
- elif mode='FAN_OUT': # Count only number of output connections.
- n = fan_out
- elif mode='FAN_AVG': # Average number of inputs and output connections.
- n = (fan_in + fan_out)/2.0 - - truncated_normal(shape, 0.0, stddev=sqrt(factor / n)) - -To get http://arxiv.org/pdf/1502.01852v1.pdf use (Default): - - factor=2.0 mode='FAN_IN' uniform=False -To get http://arxiv.org/abs/1408.5093 use: - - factor=1.0 mode='FAN_IN' uniform=True -To get http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf use: - - factor=1.0 mode='FAN_AVG' uniform=True. -To get xavier_initializer use either: - - factor=1.0 mode='FAN_AVG' uniform=True. - - factor=1.0 mode='FAN_AVG' uniform=False. - -##### Args: - - -* `factor`: Float. A multiplicative factor. -* `mode`: String. 'FAN_IN', 'FAN_OUT', 'FAN_AVG'. -* `uniform`: Whether to use uniform or normal distributed random initialization. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer that generates tensors with unit variance. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. -* `TypeError`: if `mode` is not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG']. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.Estimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.Estimator.md deleted file mode 100644 index 00f12fa0a1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.Estimator.md +++ /dev/null @@ -1,215 +0,0 @@ -Estimator class is the basic TensorFlow model trainer/evaluator. - -Parameters: - model_fn: Model function, takes features and targets tensors or dicts of - tensors and returns predictions and loss tensors. - E.g. `(features, targets) -> (predictions, loss)`. - model_dir: Directory to save model parameters, graph and etc. - classification: boolean, true if classification problem. - learning_rate: learning rate for the model. 
- optimizer: optimizer for the model, can be:
- string: name of optimizer, like 'SGD', 'Adam', 'Adagrad', 'Ftrl',
- 'Momentum', 'RMSProp'.
- Full list in contrib/layers/optimizers.py
- class: sub-class of Optimizer
- (like tf.train.GradientDescentOptimizer).
- clip_gradients: clip_norm value for call to `clip_by_global_norm`. None
- denotes no gradient clipping.
- config: Configuration object.
-- - -
-
-#### `tf.contrib.learn.Estimator.__init__(model_fn=None, model_dir=None, classification=True, learning_rate=0.1, optimizer='Adagrad', clip_gradients=None, config=None)` {#Estimator.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=32, steps=None, metrics=None, name=None)` {#Estimator.evaluate}
-
-Evaluates given model with provided evaluation data.
-
-##### Args:
-
-
-* `x`: features.
-* `y`: targets.
-* `input_fn`: Input function. If set, x and y must be None.
-* `feed_fn`: Function creating a feed dict every time it is called. Called
- once per iteration.
-* `batch_size`: minibatch size to use on the input, defaults to 32. Ignored
- if input_fn is set.
-* `steps`: Number of steps to evaluate for.
-* `metrics`: Dict of metric ops to run. If None, the default metric functions
- are used; if {}, no metrics are used.
-* `name`: Name of the evaluation if user needs to run multiple evaluations on
- different data sets, such as evaluate on training data vs test data.
-
-##### Returns:
-
- Returns self.
-
-##### Raises:
-
-
-* `ValueError`: If x or y are not None while input_fn or feed_fn is not None.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.fit(x, y, steps, batch_size=32, monitors=None)` {#Estimator.fit}
-
-Trains a model given training data X and y.
-
-##### Args:
-
-
-* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
- iterator that returns arrays of features. The training input
- samples for fitting the model.
-* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). -* `steps`: number of steps to train model for. -* `batch_size`: minibatch size to use on the input, defaults to 32. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.Estimator.get_params(deep=True)` {#Estimator.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.Estimator.model_dir` {#Estimator.model_dir} - - - - -- - - - -#### `tf.contrib.learn.Estimator.partial_fit(x, y, steps=1, batch_size=32, monitors=None)` {#Estimator.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). -* `steps`: number of steps to train model for. -* `batch_size`: minibatch size to use on the input, defaults to 32. 
-* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.Estimator.predict(x=None, input_fn=None, axis=None, batch_size=None)` {#Estimator.predict} - -Returns predictions for given features. - -##### Args: - - -* `x`: features. -* `input_fn`: Input function. If set, x must be None. -* `axis`: Axis on which to argmax (for classification). - Last axis is used by default. -* `batch_size`: Override default batch size. - -##### Returns: - - Numpy array of predicted classes or regression values. - - -- - - - -#### `tf.contrib.learn.Estimator.predict_proba(x=None, input_fn=None, batch_size=None)` {#Estimator.predict_proba} - -Returns prediction probabilities for given features (classification). - -##### Args: - - -* `x`: features. -* `input_fn`: Input function. If set, x and y must be None. -* `batch_size`: Override default batch size. - -##### Returns: - - Numpy array of predicted probabilities. - - -- - - - -#### `tf.contrib.learn.Estimator.set_params(**params)` {#Estimator.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.Estimator.train(input_fn, steps, monitors=None)` {#Estimator.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. 
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowClassifier.md
new file mode 100644
index 0000000000..63588166ec
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowClassifier.md
@@ -0,0 +1,279 @@
+TensorFlow Linear Classifier model.
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.__init__(n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowClassifier.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.bias_` {#TensorFlowClassifier.bias_}
+
+Returns bias of the linear classifier.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowClassifier.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowClassifier.fit}
+
+Builds a neural network model given provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes
+variables. Consecutive calls continue training the same model.
+This logic follows partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+ iterator that returns arrays of features. The training input
+ samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+ iterator that returns array of targets. The training target values
+ (class labels in classification, real numbers in regression).
+
+* `steps`: int, number of steps to train.
+ If None or 0, train for `self.steps`.
+* `monitors`: List of `BaseMonitor` objects to print training progress and
+ invoke early stopping.
+* `logdir`: the directory to save the log file that can be used for
+ optional visualization.
+
+##### Returns:
+
+ Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.get_params(deep=True)` {#TensorFlowClassifier.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* `deep`: boolean, optional
+ If True, will return the parameters for this estimator and
+ contained subobjects that are estimators.
+
+##### Returns:
+
+ params : mapping of string to any
+ Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.get_tensor(name)` {#TensorFlowClassifier.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+ Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.get_tensor_value(name)` {#TensorFlowClassifier.get_tensor_value}
+
+Returns value of the tensor given by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+ Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.get_variable_names()` {#TensorFlowClassifier.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+ List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.model_dir` {#TensorFlowClassifier.model_dir}
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.partial_fit(x, y)` {#TensorFlowClassifier.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset. This either can
+implement iterative training or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at the same time. Or when model is taking long time
+to converge, and you want to split up training into subparts.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+ iterator that returns arrays of features. The training input
+ samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+ iterator that returns array of targets. The training target values
+ (class label in classification, real numbers in regression).
+
+##### Returns:
+
+ Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowClassifier.predict}
+
+Predict class or regression for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `axis`: Which axis to argmax for classification.
+ By default axis 1 (next after batch) is used.
+ Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split
+ it into mini batches. By default the batch_size member
+ variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples]. The predicted classes or predicted
+ value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.predict_proba(x, batch_size=None)` {#TensorFlowClassifier.predict_proba}
+
+Predict class probability of the input samples X.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `batch_size`: If test set is too big, use batch size to split
+ it into mini batches. By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples, n_classes]. The predicted
+ probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowClassifier.restore(cls, path, config=None)` {#TensorFlowClassifier.restore}
+
+Restores model from given path.
+ +##### Args: + + +* `path`: Path to the checkpoints and other model information. +* `config`: RunConfig object that controls the configurations of the session, + e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be + reconfigured. + +##### Returns: + + Estimator, object of the subclass of TensorFlowEstimator. + + +- - - + +#### `tf.contrib.learn.TensorFlowClassifier.save(path)` {#TensorFlowClassifier.save} + +Saves checkpoints and graph to given path. + +##### Args: + + +* `path`: Folder to save model to. + + +- - - + +#### `tf.contrib.learn.TensorFlowClassifier.set_params(**params)` {#TensorFlowClassifier.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The former have parameters of the form +``__`` so that it's possible to update each +component of a nested object. + +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowClassifier.train} + +Trains a model given input builder function. + +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowClassifier.weights_` {#TensorFlowClassifier.weights_} + +Returns weights of the linear classifier. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowEstimator.md new file mode 100644 index 0000000000..c3270290b9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowEstimator.md @@ -0,0 +1,295 @@ +Base class for all TensorFlow estimators. 
+ +Parameters: + model_fn: Model function that takes input X, y tensors and outputs + prediction and loss tensors. + n_classes: Number of classes in the target. + batch_size: Mini batch size. + steps: Number of steps to run over data. + optimizer: Optimizer name (or class), for example "SGD", "Adam", + "Adagrad". + learning_rate: If this is a constant float value, no decay function is used. + Instead, a customized decay function can be passed that accepts + global_step as a parameter and returns a Tensor. + e.g. exponential decay function: + def exp_decay(global_step): + return tf.train.exponential_decay( + learning_rate=0.1, global_step=global_step, + decay_steps=2, decay_rate=0.001) + clip_gradients: Clip norm of the gradients to this value to stop + gradient explosion. + class_weight: None or list of n_classes floats. Weight associated with + classes for loss computation. If not given, all classes are supposed to + have weight one. + continue_training: when continue_training is True, once initialized, the + model will be continually trained on every call of fit. + config: RunConfig object that controls the configurations of the + session, e.g. num_cores, gpu_memory_fraction, etc. + verbose: Controls the verbosity, possible values: + 0: the algorithm and debug information is muted. + 1: trainer prints the progress. + 2: log device placement is printed. +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.__init__(model_fn, n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, class_weight=None, continue_training=False, config=None, verbose=1)` {#TensorFlowEstimator.__init__} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowEstimator.evaluate} + +See base class.
+ + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowEstimator.fit} + +Builds a neural network model given the provided `model_fn` and training +data X and y. + +Note: the first call constructs the graph and initializes +variables. Consecutive calls will continue training the same model. +This logic follows the partial_fit() interface in scikit-learn. + +To restart learning, create a new estimator. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be + iterator that returns array of targets. The training target values + (class labels in classification, real numbers in regression). + +* `steps`: int, number of steps to train. + If None or 0, train for `self.steps`. +* `monitors`: List of `BaseMonitor` objects to print training progress and + invoke early stopping. +* `logdir`: the directory to save the log file that can be used for + optional visualization. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.get_params(deep=True)` {#TensorFlowEstimator.get_params} + +Get parameters for this estimator. + +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.get_tensor(name)` {#TensorFlowEstimator.get_tensor} + +Returns tensor by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.get_tensor_value(name)` {#TensorFlowEstimator.get_tensor_value} + +Returns value of the tensor given by name.
+ +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Numpy array - value of the tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.get_variable_names()` {#TensorFlowEstimator.get_variable_names} + +Returns list of all variable names in this model. + +##### Returns: + + List of names. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.model_dir` {#TensorFlowEstimator.model_dir} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.partial_fit(x, y)` {#TensorFlowEstimator.partial_fit} + +Incremental fit on a batch of samples. + +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This can either +implement iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at the same time. Or when the model is taking a long time +to converge, and you want to split up training into subparts. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be + iterator that returns array of targets. The training target values + (class labels in classification, real numbers in regression). + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.predict(x, axis=1, batch_size=None)` {#TensorFlowEstimator.predict} + +Predict class or regression for X. + +For a classification model, the predicted class for each sample in X is +returned. For a regression model, the predicted value based on X is +returned. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `axis`: Which axis to argmax for classification. + By default axis 1 (next after batch) is used. + Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split + it into mini batches. By default the batch_size member + variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples]. The predicted classes or predicted + value. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.predict_proba(x, batch_size=None)` {#TensorFlowEstimator.predict_proba} + +Predict class probability of the input samples X. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `batch_size`: If test set is too big, use batch size to split + it into mini batches. By default the batch_size member variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples, n_classes]. The predicted + probabilities for each class. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.restore(cls, path, config=None)` {#TensorFlowEstimator.restore} + +Restores model from given path. + +##### Args: + + +* `path`: Path to the checkpoints and other model information. +* `config`: RunConfig object that controls the configurations of the session, + e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be + reconfigured. + +##### Returns: + + Estimator, object of the subclass of TensorFlowEstimator. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.save(path)` {#TensorFlowEstimator.save} + +Saves checkpoints and graph to given path. + +##### Args: + + +* `path`: Folder to save model to. + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.set_params(**params)` {#TensorFlowEstimator.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The former have parameters of the form +``__`` so that it's possible to update each +component of a nested object.
+ +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowEstimator.train(input_fn, steps, monitors=None)` {#TensorFlowEstimator.train} + +Trains a model given input builder function. + +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowLinearRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowLinearRegressor.md deleted file mode 100644 index 6c793e1b90..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.TensorFlowLinearRegressor.md +++ /dev/null @@ -1,279 +0,0 @@ -TensorFlow Linear Regression model. -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.__init__(n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowLinearRegressor.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.bias_` {#TensorFlowLinearRegressor.bias_} - -Returns bias of the linear regression. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowLinearRegressor.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowLinearRegressor.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. -This logic follows partial_fit() interface in scikit-learn. 
- -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.get_params(deep=True)` {#TensorFlowLinearRegressor.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.get_tensor(name)` {#TensorFlowLinearRegressor.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.get_tensor_value(name)` {#TensorFlowLinearRegressor.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.get_variable_names()` {#TensorFlowLinearRegressor.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.model_dir` {#TensorFlowLinearRegressor.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.partial_fit(x, y)` {#TensorFlowLinearRegressor.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowLinearRegressor.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.predict_proba(x, batch_size=None)` {#TensorFlowLinearRegressor.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.restore(cls, path, config=None)` {#TensorFlowLinearRegressor.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.save(path)` {#TensorFlowLinearRegressor.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.set_params(**params)` {#TensorFlowLinearRegressor.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowLinearRegressor.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. 
-* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearRegressor.weights_` {#TensorFlowLinearRegressor.weights_} - -Returns weights of the linear regression. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md deleted file mode 100644 index 82703a8097..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.learn.extract_pandas_data(data)` {#extract_pandas_data} - -Extract data from pandas.DataFrame for predictors - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_matrix.md new file mode 100644 index 0000000000..c2a275bc66 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_matrix.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.extract_pandas_matrix(data)` {#extract_pandas_matrix} + +Extracts numpy matrix from pandas DataFrame. 
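For orientation, the column-to-matrix conversion that `extract_pandas_matrix` performs (internally via pandas) can be sketched in plain Python. The helper name and the dict-of-columns stand-in for a DataFrame below are hypothetical, not part of TensorFlow:

```python
def extract_matrix(columns):
    """Convert a dict of equal-length columns (a stand-in for a pandas
    DataFrame) into a row-major list-of-lists matrix."""
    names = list(columns)
    n_rows = len(columns[names[0]]) if names else 0
    # Row i pairs the i-th entry of every column, in column order.
    return [[columns[name][i] for name in names] for i in range(n_rows)]

frame = {"x1": [1.0, 2.0], "x2": [3.0, 4.0]}
print(extract_matrix(frame))  # [[1.0, 3.0], [2.0, 4.0]]
```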
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.read_batch_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.read_batch_features.md new file mode 100644 index 0000000000..75b40f7e75 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.read_batch_features.md @@ -0,0 +1,43 @@ +### `tf.contrib.learn.read_batch_features(file_pattern, batch_size, features, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, parser_num_threads=1, name=None)` {#read_batch_features} + +Adds operations to read, queue, batch and parse `Example` protos. + +Given a file pattern (or list of files), this will set up a queue for file names, +read `Example` protos using the provided `reader`, use a batch queue to create +batches of examples of size `batch_size`, and parse each example given the `features` +specification. + +All queue runners are added to the queue runners collection, and may be +started via `start_queue_runners`. + +All ops are added to the default graph. + +##### Args: + + +* `file_pattern`: List of files or pattern of file paths containing + `Example` records. See `tf.gfile.Glob` for pattern rules. +* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. +* `features`: A `dict` mapping feature keys to `FixedLenFeature` or + `VarLenFeature` values. +* `reader`: A function or class that returns an object with a + `read` method, (filename tensor) -> (example tensor). +* `randomize_input`: Whether the input should be randomized. +* `num_epochs`: Integer specifying the number of times to read through the + dataset. If None, cycles through the dataset forever. NOTE - If specified, + creates a variable that must be initialized, so call + tf.initialize_all_variables() as shown in the tests. +* `queue_capacity`: Capacity for input queue. +* `reader_num_threads`: The number of threads to read examples.
+* `parser_num_threads`: The number of threads to parse examples. +* `name`: Name of resulting op. + +##### Returns: + + A dict of `Tensor` or `SparseTensor` objects for each key in `features`. + +##### Raises: + + +* `ValueError`: for invalid inputs. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_accuracy.md deleted file mode 100644 index 684d1849d3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_accuracy.md +++ /dev/null @@ -1,51 +0,0 @@ -### `tf.contrib.metrics.streaming_accuracy(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_accuracy} - -Calculates how often `predictions` matches `labels`. - -The `streaming_accuracy` function creates two local variables, `total` and -`count` that are used to compute the frequency with which `predictions` -matches `labels`. This frequency is ultimately returned as `accuracy`: an -idempotent operation that simply divides `total` by `count`. -To facilitate the estimation of the accuracy over a stream of data, the -function utilizes two operations. First, an `is_correct` operation that -computes a tensor whose shape matches `predictions` and whose elements are -set to 1.0 when the corresponding values of `predictions` and `labels match -and 0.0 otherwise. Second, an `update_op` operation whose behavior is -dependent on the value of `weights`. If `weights` is None, then `update_op` -increments `total` with the number of elements of `predictions` that match -`labels` and increments `count` with the number of elements in `values`. If -`weights` is not `None`, then `update_op` increments `total` with the reduced -sum of the product of `weights` and `is_correct` and increments `count` with -the reduced sum of `weights`.
In addition to performing the updates, -`update_op` also returns the `accuracy` value. - -##### Args: - - -* `predictions`: The predicted values, a `Tensor` of any shape. -* `labels`: The ground truth values, a `Tensor` whose shape matches - `predictions`. -* `weights`: An optional set of weights whose shape matches `predictions` - which, when not `None`, produces a weighted mean accuracy. -* `metrics_collections`: An optional list of collections that `accuracy` should - be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `accuracy`: A tensor representing the accuracy, the value of `total` divided - by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `accuracy`. - -##### Raises: - - -* `ValueError`: If the dimensions of `predictions` and `labels` don't match or - if `weight` is not `None` and its shape doesn't match `predictions` or - if either `metrics_collections` or `updates_collections` are not - a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_relative_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_relative_error.md new file mode 100644 index 0000000000..3740bbbaad --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_relative_error.md @@ -0,0 +1,49 @@ +### `tf.contrib.metrics.streaming_mean_relative_error(predictions, labels, normalizer, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_relative_error} + +Computes the mean relative error by normalizing with the given values. 
+ +The `streaming_mean_relative_error` function creates two local variables, +`total` and `count` that are used to compute the mean relative absolute error. +This average is ultimately returned as `mean_relative_error`: an idempotent +operation that simply divides `total` by `count`. To facilitate the estimation +of the mean relative error over a stream of data, the function utilizes two +operations. First, a `relative_errors` operation divides the absolute value +of the differences between `predictions` and `labels` by the `normalizer`. +Second, an `update_op` operation whose behavior is dependent on the value of +`weights`. If `weights` is None, then `update_op` increments `total` with the +reduced sum of `relative_errors` and increments `count` with the number of +elements in `relative_errors`. If `weights` is not `None`, then `update_op` +increments `total` with the reduced sum of the product of `weights` and +`relative_errors` and increments `count` with the reduced sum of `weights`. In +addition to performing the updates, `update_op` also returns the +`mean_relative_error` value. + +##### Args: + + +* `predictions`: A `Tensor` of arbitrary shape. +* `labels`: A `Tensor` of the same shape as `predictions`. +* `normalizer`: A `Tensor` of the same shape as `predictions`. +* `weights`: An optional set of weights of the same shape as `predictions`. If + `weights` is not None, the function computes a weighted mean. +* `metrics_collections`: An optional list of collections that + `mean_relative_error` should be added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `mean_relative_error`: A tensor representing the current mean, the value of + `total` divided by `count`. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately and whose value matches `mean_relative_error`. 
+ +##### Raises: + + +* `ValueError`: If `weights` is not `None` and its shape doesn't match + `predictions` or if either `metrics_collections` or `updates_collections` + are not a list or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md new file mode 100644 index 0000000000..6d682d0427 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md @@ -0,0 +1,48 @@ +### `tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_squared_error} + +Computes the mean squared error between the labels and predictions. + +The `streaming_mean_squared_error` function creates two local variables, +`total` and `count` that are used to compute the mean squared error. +This average is ultimately returned as `mean_squared_error`: an idempotent +operation that simply divides `total` by `count`. To facilitate the estimation +of the mean squared error over a stream of data, the function utilizes two +operations. First, a `squared_error` operation computes the element-wise +square of the difference between `predictions` and `labels`. Second, an +`update_op` operation whose behavior is dependent on the value of `weights`. +If `weights` is None, then `update_op` increments `total` with the +reduced sum of `squared_error` and increments `count` with the number of +elements in `squared_error`. If `weights` is not `None`, then `update_op` +increments `total` with the reduced sum of the product of `weights` and +`squared_error` and increments `count` with the reduced sum of `weights`. In +addition to performing the updates, `update_op` also returns the +`mean_squared_error` value. 
+ +##### Args: + + +* `predictions`: A `Tensor` of arbitrary shape. +* `labels`: A `Tensor` of the same shape as `predictions`. +* `weights`: An optional set of weights of the same shape as `predictions`. If + `weights` is not None, the function computes a weighted mean. +* `metrics_collections`: An optional list of collections that + `mean_squared_error` should be added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `mean_squared_error`: A tensor representing the current mean, the value of + `total` divided by `count`. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately and whose value matches `mean_squared_error`. + +##### Raises: + + +* `ValueError`: If `weights` is not `None` and its shape doesn't match + `predictions` or if either `metrics_collections` or `updates_collections` + are not a list or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_recall_at_k.md new file mode 100644 index 0000000000..dd03b95b69 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_recall_at_k.md @@ -0,0 +1,52 @@ +### `tf.contrib.metrics.streaming_recall_at_k(predictions, labels, k, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall_at_k} + +Computes the recall@k of the predictions with respect to dense labels. + +The `streaming_recall_at_k` function creates two local variables, `total` and +`count`, that are used to compute the recall@k frequency. This frequency is +ultimately returned as `recall_at_`: an idempotent operation that simply +divides `total` by `count`. 
To facilitate the estimation of recall@k over a +stream of data, the function utilizes two operations. First, an `in_top_k` +operation computes a tensor with shape [batch_size] whose elements indicate +whether or not the corresponding label is in the top `k` predictions of the +`predictions` `Tensor`. Second, an `update_op` operation whose behavior is +dependent on the value of `ignore_mask`. If `ignore_mask` is None, then +`update_op` increments `total` with the number of elements of `in_top_k` that +are set to `True` and increments `count` with the batch size. If `ignore_mask` +is not `None`, then `update_op` increments `total` with the number of elements +in `in_top_k` that are `True` whose corresponding element in `ignore_mask` is +`False`. In addition to performing the updates, `update_op` also returns the +recall value. + +##### Args: + + +* `predictions`: A floating point tensor of dimension [batch_size, num_classes] +* `labels`: A tensor of dimension [batch_size] whose type is in `int32`, + `int64`. +* `k`: The number of top elements to look at for computing recall. +* `ignore_mask`: An optional, binary tensor whose size matches `labels`. If an + element of `ignore_mask` is True, the corresponding prediction and label + pair is ignored. Otherwise, the pair is used to compute the metrics. +* `metrics_collections`: An optional list of collections that `recall_at_k` + should be added to. +* `updates_collections`: An optional list of collections `update_op` should be + added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `recall_at_k`: A tensor representing the recall@k, the fraction of labels + which fall into the top `k` predictions. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately and whose value matches `recall_at_k`.
+ +##### Raises: + + +* `ValueError`: If the dimensions of `predictions` and `labels` don't match or + if `ignore_mask` is not `None` and its shape doesn't match `predictions` + or if either `metrics_collections` or `updates_collections` are not a list + or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md deleted file mode 100644 index 58ba7b0abb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.contrib.util.constant_value(tensor)` {#constant_value} - -Returns the constant value of the given tensor, if efficiently calculable. - -This function attempts to partially evaluate the given tensor, and -returns its value as a numpy ndarray if this succeeds. - -TODO(mrry): Consider whether this function should use a registration -mechanism like gradients and ShapeFunctions, so that it is easily -extensible. - -NOTE: If `constant_value(tensor)` returns a non-`None` result, it will no -longer be possible to feed a different value for `tensor`. This allows the -result of this function to influence the graph that is constructed, and -permits static shape optimizations. - -##### Args: - - -* `tensor`: The Tensor to be evaluated. - -##### Returns: - - A numpy ndarray containing the constant value of the given `tensor`, - or None if it cannot be calculated. - -##### Raises: - - -* `TypeError`: if tensor is not an ops.Tensor. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cross.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cross.md new file mode 100644 index 0000000000..eecf2e869b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.cross.md @@ -0,0 +1,22 @@ +### `tf.cross(a, b, name=None)` {#cross} + +Compute the pairwise cross product. + +`a` and `b` must be the same shape; they can either be simple 3-element vectors, +or any shape where the innermost dimension is 3. In the latter case, each pair +of corresponding 3-element vectors is cross-multiplied independently. + +##### Args: + + +* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. + A tensor containing 3-element vectors. +* `b`: A `Tensor`. Must have the same type as `a`. + Another tensor, of same type and shape as `a`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `a`. + Pairwise cross product of the vectors in `a` and `b`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.decode_json_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.decode_json_example.md new file mode 100644 index 0000000000..bf5184c40a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.decode_json_example.md @@ -0,0 +1,25 @@ +### `tf.decode_json_example(json_examples, name=None)` {#decode_json_example} + +Convert JSON-encoded Example records to binary protocol buffer strings. + +This op translates a tensor containing Example records, encoded using +the [standard JSON +mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), +into a tensor containing the same records encoded as binary protocol +buffers. The resulting tensor can then be fed to any of the other +Example-parsing ops. 
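
The shape of a JSON-encoded Example under the proto3 JSON mapping can be sketched with only the standard `json` module. The feature names (`age`, `name`) and values here are invented for illustration; only the nesting (`features` → `feature` → typed value lists, with int64 values serialized as strings and bytes as base64) follows the mapping:

```python
import json

# One Example record in the proto3 JSON mapping. In real use, strings like
# this would be the elements of the `json_examples` tensor fed to the op.
json_example = json.dumps({
    "features": {
        "feature": {
            "age": {"int64List": {"value": ["29"]}},          # int64 as string
            "name": {"bytesList": {"value": ["YWxpY2U="]}},   # base64("alice")
        }
    }
})

record = json.loads(json_example)
feature = record["features"]["feature"]
```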
+ +##### Args: + + +* `json_examples`: A `Tensor` of type `string`. + Each string is a JSON object serialized according to the JSON + mapping of the Example proto. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `string`. + Each string is a binary Example protocol buffer corresponding + to the respective element of `json_examples`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.delete_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.delete_session_tensor.md new file mode 100644 index 0000000000..2f52941c5f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.delete_session_tensor.md @@ -0,0 +1,19 @@ +### `tf.delete_session_tensor(name=None)` {#delete_session_tensor} + +Delete the tensor by feeding a tensor handle. + +This is EXPERIMENTAL and subject to change. + +Delete the tensor of a given tensor handle. The tensor is produced +in a previous run() and stored in the state of the session. + +##### Args: + + +* `name`: Optional name prefix for the return tensor. + +##### Returns: + + A pair of graph elements. The first is a placeholder for feeding a + tensor handle and the second is a deletion operation. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.div.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.div.md deleted file mode 100644 index 92eba7927a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.div.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.div(x, y, name=None)` {#div} - -Returns x / y element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.CancelledError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.CancelledError.md
new file mode 100644
index 0000000000..cf20c0e2e3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.CancelledError.md
@@ -0,0 +1,17 @@
+Raised when an operation or step is cancelled.
+
+For example, a long-running operation (e.g.
+[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)) may be
+cancelled by running another operation (e.g.
+[`queue.close(cancel_pending_enqueues=True)`](../../api_docs/python/io_ops.md#QueueBase.close)),
+or by [closing the session](../../api_docs/python/client.md#Session.close).
+A step that is running such a long-running operation will fail by raising
+`CancelledError`.
+
+- - -
+
+#### `tf.errors.CancelledError.__init__(node_def, op, message)` {#CancelledError.__init__}
+
+Creates a `CancelledError`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.DataLossError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.DataLossError.md
deleted file mode 100644
index 3193e77ae3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.DataLossError.md
+++ /dev/null
@@ -1,13 +0,0 @@
-Raised when unrecoverable data loss or corruption is encountered.
-
-For example, this may be raised by running a
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation, if the file is truncated while it is being read.
-
-- - -
-
-#### `tf.errors.DataLossError.__init__(node_def, op, message)` {#DataLossError.__init__}
-
-Creates a `DataLossError`.
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.FailedPreconditionError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.FailedPreconditionError.md deleted file mode 100644 index 1cbd338bf9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.FailedPreconditionError.md +++ /dev/null @@ -1,13 +0,0 @@ -Operation was rejected because the system is not in a state to execute it. - -This exception is most commonly raised when running an operation -that reads a [`tf.Variable`](../../api_docs/python/state_ops.md#Variable) -before it has been initialized. - -- - - - -#### `tf.errors.FailedPreconditionError.__init__(node_def, op, message)` {#FailedPreconditionError.__init__} - -Creates a `FailedPreconditionError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fft.md new file mode 100644 index 0000000000..5a2c3c635d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fft.md @@ -0,0 +1,14 @@ +### `tf.fft(input, name=None)` {#fft} + +Compute the 1-dimensional discrete Fourier Transform. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 vector. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. The 1D Fourier Transform of `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_collection.md new file mode 100644 index 0000000000..fc0044b490 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_collection.md @@ -0,0 +1,25 @@ +### `tf.get_collection(key, scope=None)` {#get_collection} + +Wrapper for `Graph.get_collection()` using the default graph. 
+
+See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection)
+for more details.
+
+##### Args:
+
+
+* `key`: The key for the collection. For example, the `GraphKeys` class
+  contains many standard names for collections.
+* `scope`: (Optional.) If supplied, the resulting list is filtered to include
+  only items whose `name` attribute matches using `re.match`. Items
+  without a `name` attribute are never returned if a scope is supplied and
+  the choice of `re.match` means that a `scope` without special tokens
+  filters by prefix.
+
+##### Returns:
+
+  The list of values in the collection with the given `name`, or
+  an empty list if no value has been added to that collection. The
+  list contains the values in the order under which they were
+  collected.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.group.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.group.md
deleted file mode 100644
index 7958cf9e58..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.group.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.group(*inputs, **kwargs)` {#group}
-
-Create an op that groups multiple operations.
-
-When this op finishes, all ops in `input` have finished. This op has no
-output.
-
-See also `tuple` and `with_dependencies`.
-
-##### Args:
-
-
-* `*inputs`: Zero or more tensors to group.
-* `**kwargs`: Optional parameters to pass when constructing the NodeDef.
-* `name`: A name for this operation (optional).
-
-##### Returns:
-
-  An Operation that executes all its inputs.
-
-##### Raises:
-
-
-* `ValueError`: If an unknown keyword argument is provided.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.histogram_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.histogram_summary.md deleted file mode 100644 index 1ede11e820..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.histogram_summary.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.histogram_summary(tag, values, collections=None, name=None)` {#histogram_summary} - -Outputs a `Summary` protocol buffer with a histogram. - -The generated -[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) -has one summary value containing a histogram for `values`. - -This op reports an `InvalidArgument` error if any value is not finite. - -##### Args: - - -* `tag`: A `string` `Tensor`. 0-D. Tag to use for the summary value. -* `values`: A real numeric `Tensor`. Any shape. Values to use to - build the histogram. -* `collections`: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igamma.md deleted file mode 100644 index 1cf6860651..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igamma.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.igamma(a, x, name=None)` {#igamma} - -Compute the lower regularized incomplete Gamma function `Q(a, x)`. - -The lower regularized incomplete Gamma function is defined as: - -``` -P(a, x) = gamma(a, x) / Gamma(x) = 1 - Q(a, x) -``` -where -``` -gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt -``` -is the lower incomplete Gamma function. - -Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete -Gamma function. 
- -##### Args: - - -* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `x`: A `Tensor`. Must have the same type as `a`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `a`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.convert_image_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.convert_image_dtype.md deleted file mode 100644 index 63db6f36a9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.convert_image_dtype.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.image.convert_image_dtype(image, dtype, saturate=False, name=None)` {#convert_image_dtype} - -Convert `image` to `dtype`, scaling its values if needed. - -Images that are represented using floating point values are expected to have -values in the range [0,1). Image data stored in integer data types are -expected to have values in the range `[0,MAX]`, where `MAX` is the largest -positive representable number for the data type. - -This op converts between data types, scaling the values appropriately before -casting. - -Note that converting from floating point inputs to integer types may lead to -over/underflow problems. Set saturate to `True` to avoid such problem in -problematic conversions. If enabled, saturation will clip the output into the -allowed range before performing a potentially dangerous cast (and only before -performing such a cast, i.e., when casting from a floating point to an integer -type, and when casting from a signed to an unsigned type; `saturate` has no -effect on casts between floats, or on casts that increase the type's range). - -##### Args: - - -* `image`: An image. -* `dtype`: A `DType` to convert `image` to. -* `saturate`: If `True`, clip the input before casting (if necessary). -* `name`: A name for this operation (optional). - -##### Returns: - - `image`, converted to `dtype`. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.decode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.decode_jpeg.md
new file mode 100644
index 0000000000..f4c6f1340a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.decode_jpeg.md
@@ -0,0 +1,41 @@
+### `tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None)` {#decode_jpeg}
+
+Decode a JPEG-encoded image to a uint8 tensor.
+
+The attr `channels` indicates the desired number of color channels for the
+decoded image.
+
+Accepted values are:
+
+* 0: Use the number of channels in the JPEG-encoded image.
+* 1: output a grayscale image.
+* 3: output an RGB image.
+
+If needed, the JPEG-encoded image is transformed to match the requested number
+of color channels.
+
+The attr `ratio` allows downscaling the image by an integer factor during
+decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than
+downscaling the image later.
+
+##### Args:
+
+
+* `contents`: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.
+* `channels`: An optional `int`. Defaults to `0`.
+  Number of color channels for the decoded image.
+* `ratio`: An optional `int`. Defaults to `1`. Downscaling ratio.
+* `fancy_upscaling`: An optional `bool`. Defaults to `True`.
+  If true, use a slower but nicer upscaling of the
+  chroma planes (yuv420/422 only).
+* `try_recover_truncated`: An optional `bool`. Defaults to `False`.
+  If true, try to recover an image from truncated input.
+* `acceptable_fraction`: An optional `float`. Defaults to `1`.
+  The minimum required fraction of lines before a truncated
+  input is accepted.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
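
Because `ratio` only accepts 1, 2, 4, or 8, decoding for thumbnails typically picks the largest ratio that still leaves the image at or above the final target size. A sketch of that choice (the helper name `pick_decode_ratio` is hypothetical, not part of the API):

```python
def pick_decode_ratio(height, width, target_height, target_width):
    """Largest ratio in {1, 2, 4, 8} that keeps both dims >= the target."""
    best = 1
    for ratio in (2, 4, 8):
        if height // ratio >= target_height and width // ratio >= target_width:
            best = ratio
    return best
```

The chosen value would then be passed as the `ratio` attr, leaving any remaining downscaling to a cheaper resize on the already-smaller decoded image.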
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_left_right.md
deleted file mode 100644
index ac8c99806e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_left_right.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.image.flip_left_right(image)` {#flip_left_right}
-
-Flip an image horizontally (left to right).
-
-Outputs the contents of `image` flipped along the second dimension, which is
-`width`.
-
-See also `reverse()`.
-
-##### Args:
-
-
-* `image`: A 3-D tensor of shape `[height, width, channels].`
-
-##### Returns:
-
-  A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* `ValueError`: if the shape of `image` not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_up_down.md
new file mode 100644
index 0000000000..ed92277f8a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.flip_up_down.md
@@ -0,0 +1,23 @@
+### `tf.image.flip_up_down(image)` {#flip_up_down}
+
+Flip an image vertically (upside down).
+
+Outputs the contents of `image` flipped along the first dimension, which is
+`height`.
+
+See also `reverse()`.
+
+##### Args:
+
+
+* `image`: A 3-D tensor of shape `[height, width, channels].`
+
+##### Returns:
+
+  A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* `ValueError`: if the shape of `image` is not supported.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.hsv_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.hsv_to_rgb.md deleted file mode 100644 index 3193dd9c60..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.hsv_to_rgb.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.image.hsv_to_rgb(images, name=None)` {#hsv_to_rgb} - -Convert one or more images from HSV to RGB. - -Outputs a tensor of the same shape as the `images` tensor, containing the RGB -value of the pixels. The output is only well defined if the value in `images` -are in `[0,1]`. - -See `rgb_to_hsv` for a description of the HSV encoding. - -##### Args: - - -* `images`: A `Tensor` of type `float32`. - 1-D or higher rank. HSV data to convert. Last dimension must be size 3. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. `images` converted to RGB. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_brightness.md new file mode 100644 index 0000000000..6c773b6985 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_brightness.md @@ -0,0 +1,25 @@ +### `tf.image.random_brightness(image, max_delta, seed=None)` {#random_brightness} + +Adjust the brightness of images by a random factor. + +Equivalent to `adjust_brightness()` using a `delta` randomly picked in the +interval `[-max_delta, max_delta)`. + +##### Args: + + +* `image`: An image. +* `max_delta`: float, must be non-negative. +* `seed`: A Python integer. Used to create a random seed. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. + +##### Returns: + + The brightness-adjusted image. + +##### Raises: + + +* `ValueError`: if `max_delta` is negative. 
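
`random_brightness` is described above as `adjust_brightness` with a randomly sampled `delta`; that relationship can be sketched with the standard library (illustrative only, not the TF implementation — the helper names and the flat pixel-list image model are assumptions):

```python
import random

def random_brightness_delta(max_delta, seed=None):
    """Sample the brightness delta from [-max_delta, max_delta)."""
    if max_delta < 0:
        raise ValueError("max_delta must be non-negative.")
    rng = random.Random(seed)
    return rng.uniform(-max_delta, max_delta)

def adjust_brightness(pixels, delta):
    # Adding a constant delta to every channel value shifts brightness.
    return [p + delta for p in pixels]
```

Passing a fixed `seed` makes the sampled delta, and hence the adjusted image, reproducible across runs.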
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_saturation.md deleted file mode 100644 index 397bfc4d0b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_saturation.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.image.random_saturation(image, lower, upper, seed=None)` {#random_saturation} - -Adjust the saturation of an RGB image by a random factor. - -Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly -picked in the interval `[lower, upper]`. - -##### Args: - - -* `image`: RGB image or images. Size of the last dimension must be 3. -* `lower`: float. Lower bound for the random saturation factor. -* `upper`: float. Upper bound for the random saturation factor. -* `seed`: An operation-specific seed. It will be used in conjunction - with the graph-level seed to determine the real seeds that will be - used in this operation. Please see the documentation of - set_random_seed for its interaction with the graph-level random seed. - -##### Returns: - - Adjusted image(s), same shape and DType as `image`. - -##### Raises: - - -* `ValueError`: if `upper <= lower` or if `lower < 0`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_area.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_area.md deleted file mode 100644 index dbc6fd1bcd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_area.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.image.resize_area(images, size, align_corners=None, name=None)` {#resize_area} - -Resize `images` to `size` using area interpolation. - -Input images can be of different types but output images are always float. - -##### Args: - - -* `images`: A `Tensor`. 
Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. - 4-D with shape `[batch, height, width, channels]`. -* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The - new size for the images. -* `align_corners`: An optional `bool`. Defaults to `False`. - If true, rescale input by (new_height - 1) / (height - 1), which - exactly aligns the 4 corners of images and resized images. If false, rescale - by new_height / height. Treat similarly the width dimension. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. 4-D with shape - `[batch, new_height, new_width, channels]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md new file mode 100644 index 0000000000..1805c7423d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md @@ -0,0 +1,24 @@ +### `tf.image.resize_bicubic(images, size, align_corners=None, name=None)` {#resize_bicubic} + +Resize `images` to `size` using bicubic interpolation. + +Input images can be of different types but output images are always float. + +##### Args: + + +* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. + 4-D with shape `[batch, height, width, channels]`. +* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The + new size for the images. +* `align_corners`: An optional `bool`. Defaults to `False`. + If true, rescale input by (new_height - 1) / (height - 1), which + exactly aligns the 4 corners of images and resized images. If false, rescale + by new_height / height. Treat similarly the width dimension. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. 
4-D with shape + `[batch, new_height, new_width, channels]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.import_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.import_graph_def.md deleted file mode 100644 index 0ff3d621d4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.import_graph_def.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)` {#import_graph_def} - -Imports the TensorFlow graph in `graph_def` into the Python `Graph`. - -This function provides a way to import a serialized TensorFlow -[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) -protocol buffer, and extract individual objects in the `GraphDef` as -[`Tensor`](#Tensor) and [`Operation`](#Operation) objects. See -[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a -`GraphDef` proto. - -##### Args: - - -* `graph_def`: A `GraphDef` proto containing operations to be imported into - the default graph. -* `input_map`: A dictionary mapping input names (as strings) in `graph_def` - to `Tensor` objects. The values of the named input tensors in the - imported graph will be re-mapped to the respective `Tensor` values. -* `return_elements`: A list of strings containing operation names in - `graph_def` that will be returned as `Operation` objects; and/or - tensor names in `graph_def` that will be returned as `Tensor` objects. -* `name`: (Optional.) A prefix that will be prepended to the names in - `graph_def`. Defaults to `"import"`. -* `op_dict`: (Optional.) A dictionary mapping op type names to `OpDef` protos. - Must contain an `OpDef` proto for each op type named in `graph_def`. - If omitted, uses the `OpDef` protos registered in the global registry. -* `producer_op_list`: (Optional.) 
An `OpList` proto with the (possibly stripped) - list of `OpDef`s used by the producer of the graph. If provided, attrs - for ops in `graph_def` that are not in `op_dict` that have their default - value according to `producer_op_list` will be removed. This will allow - some more `GraphDef`s produced by later binaries to be accepted by - earlier binaries. - -##### Returns: - - A list of `Operation` and/or `Tensor` objects from the imported graph, - corresponding to the names in `return_elements`. - -##### Raises: - - -* `TypeError`: If `graph_def` is not a `GraphDef` proto, - `input_map` is not a dictionary mapping strings to `Tensor` objects, - or `return_elements` is not a list of strings. -* `ValueError`: If `input_map`, or `return_elements` contains names that - do not appear in `graph_def`, or `graph_def` is not well-formed (e.g. - it refers to an unknown tensor). - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.initialize_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.initialize_variables.md deleted file mode 100644 index 8941ab4853..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.initialize_variables.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.initialize_variables(var_list, name='init')` {#initialize_variables} - -Returns an Op that initializes a list of variables. - -After you launch the graph in a session, you can run the returned Op to -initialize all the variables in `var_list`. This Op runs all the -initializers of the variables in `var_list` in parallel. - -Calling `initialize_variables()` is equivalent to passing the list of -initializers to `Group()`. - -If `var_list` is empty, however, the function still returns an Op that can -be run. That Op just has no effect. - -##### Args: - - -* `var_list`: List of `Variable` objects to initialize. -* `name`: Optional name for the returned operation. 
-
-##### Returns:
-
-  An Op that run the initializers of all the specified variables.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_inf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_inf.md
deleted file mode 100644
index 8955d5c9cc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_inf.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.is_inf(x, name=None)` {#is_inf}
-
-Returns which elements of x are Inf.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md
new file mode 100644
index 0000000000..570845f502
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md
@@ -0,0 +1,28 @@
+### `tf.linspace(start, stop, num, name=None)` {#linspace}
+
+Generates values in an interval.
+
+A sequence of `num` evenly-spaced values is generated beginning at `start`.
+If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`,
+so that the last one is exactly `stop`.
+
+For example:
+
+```
+tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0  11.0  12.0]
+```
+
+##### Args:
+
+
+* `start`: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+  First entry in the range.
+* `stop`: A `Tensor`. Must have the same type as `start`.
+  Last entry in the range.
+* `num`: A `Tensor` of type `int32`. Number of values to generate.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `start`. 1-D. The generated values.
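
The spacing rule for `linspace` can be sketched in plain Python (a hypothetical scalar `linspace` mirroring the documented semantics, not the TF kernel):

```python
def linspace(start, stop, num):
    """num evenly spaced values from start to stop, endpoints included."""
    if num == 1:
        return [start]
    # Each value increases by (stop - start) / (num - 1), so the last
    # value is exactly stop.
    step = (stop - start) / (num - 1)
    return [start + step * i for i in range(num)]
```

`linspace(10.0, 12.0, 3)` reproduces the `[10.0, 11.0, 12.0]` example above.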
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.listdiff.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.listdiff.md new file mode 100644 index 0000000000..1f04bd8d9e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.listdiff.md @@ -0,0 +1,40 @@ +### `tf.listdiff(x, y, name=None)` {#listdiff} + +Computes the difference between two lists of numbers or strings. + +Given a list `x` and a list `y`, this operation returns a list `out` that +represents all values that are in `x` but not in `y`. The returned list `out` +is sorted in the same order that the numbers appear in `x` (duplicates are +preserved). This operation also returns a list `idx` that represents the +position of each `out` element in `x`. In other words: + +`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` + +For example, given this input: + +```prettyprint +x = [1, 2, 3, 4, 5, 6] +y = [1, 3, 5] +``` + +This operation would return: + +```prettyprint +out ==> [2, 4, 6] +idx ==> [1, 3, 5] +``` + +##### Args: + + +* `x`: A `Tensor`. 1-D. Values to keep. +* `y`: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of `Tensor` objects (out, idx). + +* `out`: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`. +* `idx`: A `Tensor` of type `int32`. 1-D. Positions of `x` values preserved in `out`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.local_variables.md deleted file mode 100644 index b3612c7cbf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.local_variables.md +++ /dev/null @@ -1,8 +0,0 @@ -### `tf.local_variables()` {#local_variables} - -Returns all variables created with collection=[LOCAL_VARIABLES]. 
- -##### Returns: - - A list of local Variable objects. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log.md new file mode 100644 index 0000000000..4ce9ddac8c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log.md @@ -0,0 +1,16 @@ +### `tf.log(x, name=None)` {#log} + +Computes natural logarithm of x element-wise. + +I.e., \\(y = \log_e x\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matrix_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matrix_solve.md new file mode 100644 index 0000000000..b33decd2e9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matrix_solve.md @@ -0,0 +1,21 @@ +### `tf.matrix_solve(matrix, rhs, adjoint=None, name=None)` {#matrix_solve} + +Solves a system of linear equations. Checks for invertibility. + +##### Args: + + +* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. + Shape is `[M, M]`. +* `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[M, K]`. +* `adjoint`: An optional `bool`. Defaults to `False`. + Boolean indicating whether to solve with `matrix` or its adjoint. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `matrix`. + Shape is `[M, K]`. If `adjoint` is `False` then `output` that solves + `matrix` * `output` = `rhs`. If `adjoint` is `True` then `output` that solves + `adjoint(matrix)` * `output` = `rhs`. 
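The two `adjoint` modes can be sketched with NumPy, using `np.linalg.solve` as a stand-in for the graph op (the concrete matrices here are illustrative assumptions):

```python
import numpy as np

# Illustrative 2x2 system; any invertible `matrix` works.
matrix = np.array([[2.0, 0.0],
                   [1.0, 1.0]])
rhs = np.array([[4.0],
                [5.0]])

# adjoint=False: find `output` such that matrix @ output = rhs
output = np.linalg.solve(matrix, rhs)            # x = 2, y = 3

# adjoint=True: find `output` such that conj(matrix).T @ output = rhs
output_adj = np.linalg.solve(matrix.conj().T, rhs)
```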
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.maximum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.maximum.md new file mode 100644 index 0000000000..309946f435 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.maximum.md @@ -0,0 +1,15 @@ +### `tf.maximum(x, y, name=None)` {#maximum} + +Returns the max of x and y (i.e. x > y ? x : y) element-wise, broadcasts. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_summary.md deleted file mode 100644 index b61a501c2d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_summary.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.merge_summary(inputs, collections=None, name=None)` {#merge_summary} - -Merges summaries. - -This op creates a -[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) -protocol buffer that contains the union of all the values in the input -summaries. - -When the Op is run, it reports an `InvalidArgument` error if multiple values -in the summaries to merge use the same tag. - -##### Args: - - -* `inputs`: A list of `string` `Tensor` objects containing serialized `Summary` - protocol buffers. -* `collections`: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer resulting from the merging. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.mul.md new file mode 100644 index 0000000000..3d6fa56864 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.mul.md @@ -0,0 +1,15 @@ +### `tf.mul(x, y, name=None)` {#mul} + +Returns x * y element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.atrous_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.atrous_conv2d.md new file mode 100644 index 0000000000..cf4c473689 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.atrous_conv2d.md @@ -0,0 +1,107 @@ +### `tf.nn.atrous_conv2d(value, filters, rate, padding, name=None)` {#atrous_conv2d} + +Atrous convolution (a.k.a. convolution with holes or dilated convolution). + +Computes a 2-D atrous convolution, also known as convolution with holes or +dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` +parameter is equal to one, it performs regular 2-D convolution. If the `rate` +parameter is greater than one, it performs convolution with holes, sampling +the input values every `rate` pixels in the `height` and `width` dimensions. +This is equivalent to convolving the input with a set of upsampled filters, +produced by inserting `rate - 1` zeros between two consecutive values of the +filters along the `height` and `width` dimensions, hence the name atrous +convolution or convolution with holes (the French word trous means holes in +English). 
+ +More specifically: + + output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] * + value[b, i + rate * di, j + rate * dj, q] + +Atrous convolution allows us to explicitly control how densely to compute +feature responses in fully convolutional networks. Used in conjunction with +bilinear interpolation, it offers an alternative to `conv2d_transpose` in +dense prediction tasks such as semantic image segmentation, optical flow +computation, or depth estimation. It also allows us to effectively enlarge +the field of view of filters without increasing the number of parameters or +the amount of computation. + +For a description of atrous convolution and how it can be used for dense +feature extraction, please see: [Semantic Image Segmentation with Deep +Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062). +The same operation is investigated further in [Multi-Scale Context Aggregation +by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works +that effectively use atrous convolution in different ways are, among others, +[OverFeat: Integrated Recognition, Localization and Detection using +Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image +Scanning with Deep Max-Pooling Convolutional Neural Networks] +(http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related +to the so-called noble identities in multi-rate signal processing. + +There are many different ways to implement atrous convolution (see the refs +above). The implementation here reduces + + atrous_conv2d(value, filters, rate, padding=padding) + +to the following three operations: + + paddings = ... + net = space_to_batch(value, paddings, block_size=rate) + net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID") + crops = ... + net = batch_to_space(net, crops, block_size=rate) + +Advanced usage. 
Note the following optimization: A sequence of `atrous_conv2d`
+operations with identical `rate` parameters, 'SAME' `padding`, and filters
+with odd heights/widths:
+
+    net = atrous_conv2d(net, filters1, rate, padding="SAME")
+    net = atrous_conv2d(net, filters2, rate, padding="SAME")
+    ...
+    net = atrous_conv2d(net, filtersK, rate, padding="SAME")
+
+can equivalently be performed more cheaply, in both computation and memory,
+as:
+
+    pad = ...  # padding so that the input dims are multiples of rate
+    net = space_to_batch(net, paddings=pad, block_size=rate)
+    net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
+    net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
+    ...
+    net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
+    net = batch_to_space(net, crops=pad, block_size=rate)
+
+because a pair of consecutive `space_to_batch` and `batch_to_space` ops with
+the same `block_size` cancel out when their respective `paddings` and `crops`
+inputs are identical.
+
+##### Args:
+
+
+* `value`: A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC"
+    format. Its shape is `[batch, in_height, in_width, in_channels]`.
+* `filters`: A 4-D `Tensor` with the same type as `value` and shape
+    `[filter_height, filter_width, in_channels, out_channels]`. `filters`'
+    `in_channels` dimension must match that of `value`. Atrous convolution is
+    equivalent to standard convolution with upsampled filters with effective
+    height `filter_height + (filter_height - 1) * (rate - 1)` and effective
+    width `filter_width + (filter_width - 1) * (rate - 1)`, produced by
+    inserting `rate - 1` zeros along consecutive elements across the
+    `filters`' spatial dimensions.
+* `rate`: A positive int32. The stride with which we sample input values across
+    the `height` and `width` dimensions. Equivalently, the rate by which we
+    upsample the filter values by inserting zeros across the `height` and
+    `width` dimensions.
In the literature, the same parameter is sometimes + called `input stride` or `dilation`. +* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. +* `name`: Optional name for the returned tensor. + +##### Returns: + + A `Tensor` with the same type as `value`. + +##### Raises: + + +* `ValueError`: If input/output depth does not match `filters`' shape, or if + padding is other than `'VALID'` or `'SAME'`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv2d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv2d_transpose.md new file mode 100644 index 0000000000..ee459ae0e6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv2d_transpose.md @@ -0,0 +1,34 @@ +### `tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)` {#conv2d_transpose} + +The transpose of `conv2d`. + +This operation is sometimes called "deconvolution" after [Deconvolutional +Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is +actually the transpose (gradient) of `conv2d` rather than an actual +deconvolution. + +##### Args: + + +* `value`: A 4-D `Tensor` of type `float` and shape + `[batch, height, width, in_channels]`. +* `filter`: A 4-D `Tensor` with the same type as `value` and shape + `[height, width, output_channels, in_channels]`. `filter`'s + `in_channels` dimension must match that of `value`. +* `output_shape`: A 1-D `Tensor` representing the output shape of the + deconvolution op. +* `strides`: A list of ints. The stride of the sliding window for each + dimension of the input tensor. +* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. +* `name`: Optional name for the returned tensor. + +##### Returns: + + A `Tensor` with the same type as `value`. 
+ +##### Raises: + + +* `ValueError`: If input/output depth does not match `filter`'s shape, or if + padding is other than `'VALID'` or `'SAME'`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.dropout.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.dropout.md new file mode 100644 index 0000000000..4f2b7c0214 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.dropout.md @@ -0,0 +1,38 @@ +### `tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)` {#dropout} + +Computes dropout. + +With probability `keep_prob`, outputs the input element scaled up by +`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected +sum is unchanged. + +By default, each element is kept or dropped independently. If `noise_shape` +is specified, it must be +[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) +to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` +will make independent decisions. For example, if `shape(x) = [k, l, m, n]` +and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be +kept independently and each row and column will be kept or not kept together. + +##### Args: + + +* `x`: A tensor. +* `keep_prob`: A scalar `Tensor` with the same type as x. The probability + that each element is kept. +* `noise_shape`: A 1-D `Tensor` of type `int32`, representing the + shape for randomly generated keep/drop flags. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for this operation (optional). + +##### Returns: + + A Tensor of the same shape of `x`. + +##### Raises: + + +* `ValueError`: If `keep_prob` is not in `(0, 1]`. 
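The keep-and-rescale behavior described above can be sketched in NumPy (an illustrative stand-in for the graph op; `dropout_sketch` is a hypothetical helper, not part of the API):

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed, analogous to the `seed` argument

def dropout_sketch(x, keep_prob):
    # Keep each element independently with probability keep_prob and
    # scale the survivors by 1 / keep_prob so the expected sum is unchanged.
    mask = rng.uniform(size=x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(100000)
y = dropout_sketch(x, keep_prob=0.5)
print(abs(y.sum() - x.sum()) / x.sum() < 0.01)  # True: expected sum preserved
```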
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.l2_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.l2_loss.md deleted file mode 100644 index fd648ca642..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.l2_loss.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.nn.l2_loss(t, name=None)` {#l2_loss} - -L2 Loss. - -Computes half the L2 norm of a tensor without the `sqrt`: - - output = sum(t ** 2) / 2 - -##### Args: - - -* `t`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Typically 2-D, but may have any dimensions. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `t`. 0-D. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.max_pool_with_argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.max_pool_with_argmax.md deleted file mode 100644 index 0bf84c16d0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.max_pool_with_argmax.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)` {#max_pool_with_argmax} - -Performs max pooling on the input and outputs both max values and indices. - -The indices in `argmax` are flattened, so that a maximum value at position -`[b, y, x, c]` becomes flattened index -`((b * height + y) * width + x) * channels + c`. - -##### Args: - - -* `input`: A `Tensor` of type `float32`. - 4-D with shape `[batch, height, width, channels]`. Input to pool over. -* `ksize`: A list of `ints` that has length `>= 4`. - The size of the window for each dimension of the input tensor. -* `strides`: A list of `ints` that has length `>= 4`. - The stride of the sliding window for each dimension of the - input tensor. 
-* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `Targmax`: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of `Tensor` objects (output, argmax). - -* `output`: A `Tensor` of type `float32`. The max pooled output tensor. -* `argmax`: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sigmoid_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sigmoid_cross_entropy_with_logits.md new file mode 100644 index 0000000000..c449554fb8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sigmoid_cross_entropy_with_logits.md @@ -0,0 +1,48 @@ +### `tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)` {#sigmoid_cross_entropy_with_logits} + +Computes sigmoid cross entropy given `logits`. + +Measures the probability error in discrete classification tasks in which each +class is independent and not mutually exclusive. For instance, one could +perform multilabel classification where a picture can contain both an elephant +and a dog at the same time. + +For brevity, let `x = logits`, `z = targets`. 
The logistic loss is
+
+      z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
+    = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
+    = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
+    = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
+    = (1 - z) * x + log(1 + exp(-x))
+    = x - x * z + log(1 + exp(-x))
+
+For x < 0, to avoid overflow in exp(-x), we reformulate the above
+
+      x - x * z + log(1 + exp(-x))
+    = log(exp(x)) - x * z + log(1 + exp(-x))
+    = - x * z + log(1 + exp(x))
+
+Hence, to ensure stability and avoid overflow, the implementation uses this
+equivalent formulation
+
+    max(x, 0) - x * z + log(1 + exp(-abs(x)))
+
+`logits` and `targets` must have the same type and shape.
+
+##### Args:
+
+
+* `logits`: A `Tensor` of type `float32` or `float64`.
+* `targets`: A `Tensor` of the same type and shape as `logits`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of the same shape as `logits` with the componentwise
+  logistic losses.
+
+##### Raises:
+
+
+* `ValueError`: If `logits` and `targets` do not have the same shape.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.softmax_cross_entropy_with_logits.md
new file mode 100644
index 0000000000..d6054c49ac
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.softmax_cross_entropy_with_logits.md
@@ -0,0 +1,36 @@
+### `tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)` {#softmax_cross_entropy_with_logits}
+
+Computes softmax cross entropy between `logits` and `labels`.
+
+Measures the probability error in discrete classification tasks in which the
+classes are mutually exclusive (each entry is in exactly one class). For
+example, each CIFAR-10 image is labeled with one and only one label: an image
+can be a dog or a truck, but not both.
+ +**NOTE:** While the classes are mutually exclusive, their probabilities +need not be. All that is required is that each row of `labels` is +a valid probability distribution. If they are not, the computation of the +gradient will be incorrect. + +If using exclusive `labels` (wherein one and only +one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`. + +**WARNING:** This op expects unscaled logits, since it performs a `softmax` +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +`logits` and `labels` must have the same shape `[batch_size, num_classes]` +and the same dtype (either `float32` or `float64`). + +##### Args: + + +* `logits`: Unscaled log probabilities. +* `labels`: Each row `labels[i]` must be a valid probability distribution. +* `name`: A name for the operation (optional). + +##### Returns: + + A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the + softmax cross entropy loss. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sparse_softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sparse_softmax_cross_entropy_with_logits.md new file mode 100644 index 0000000000..6d53d84c5b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sparse_softmax_cross_entropy_with_logits.md @@ -0,0 +1,38 @@ +### `tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name=None)` {#sparse_softmax_cross_entropy_with_logits} + +Computes sparse softmax cross entropy between `logits` and `labels`. + +Measures the probability error in discrete classification tasks in which the +classes are mutually exclusive (each entry is in exactly one class). For +example, each CIFAR-10 image is labeled with one and only one label: an image +can be a dog or a truck, but not both. 
+ +**NOTE:** For this operation, the probability of a given label is considered +exclusive. That is, soft classes are not allowed, and the `labels` vector +must provide a single specific index for the true class for each row of +`logits` (each minibatch entry). For soft softmax classification with +a probability distribution for each entry, see +`softmax_cross_entropy_with_logits`. + +**WARNING:** This op expects unscaled logits, since it performs a softmax +on `logits` internally for efficiency. Do not call this op with the +output of `softmax`, as it will produce incorrect results. + +`logits` must have the shape `[batch_size, num_classes]` +and dtype `float32` or `float64`. + +`labels` must have the shape `[batch_size]` and dtype `int32` or `int64`. + +##### Args: + + +* `logits`: Unscaled log probabilities. +* `labels`: Each entry `labels[i]` must be an index in `[0, num_classes)`. Other + values will result in a loss of 0, but incorrect gradient computations. +* `name`: A name for the operation (optional). + +##### Returns: + + A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the + softmax cross entropy loss. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.weighted_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.weighted_cross_entropy_with_logits.md new file mode 100644 index 0000000000..697de67936 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.weighted_cross_entropy_with_logits.md @@ -0,0 +1,52 @@ +### `tf.nn.weighted_cross_entropy_with_logits(logits, targets, pos_weight, name=None)` {#weighted_cross_entropy_with_logits} + +Computes a weighted cross entropy. + +This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`, +allows one to trade off recall and precision by up- or down-weighting the +cost of a positive error relative to a negative error. 
+
+The usual cross-entropy cost is defined as:
+
+  targets * -log(sigmoid(logits)) +
+      (1 - targets) * -log(1 - sigmoid(logits))
+
+The argument `pos_weight` is used as a multiplier for the positive targets:
+
+  targets * -log(sigmoid(logits)) * pos_weight +
+      (1 - targets) * -log(1 - sigmoid(logits))
+
+For brevity, let `x = logits`, `z = targets`, `q = pos_weight`.
+The loss is:
+
+      qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
+    = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
+    = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
+    = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
+    = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
+    = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
+
+Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,
+the implementation uses
+
+    (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
+
+`logits` and `targets` must have the same type and shape.
+
+##### Args:
+
+
+* `logits`: A `Tensor` of type `float32` or `float64`.
+* `targets`: A `Tensor` of the same type and shape as `logits`.
+* `pos_weight`: A coefficient to use on the positive examples.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of the same shape as `logits` with the componentwise
+  weighted logistic losses.
+
+##### Raises:
+
+
+* `ValueError`: If `logits` and `targets` do not have the same shape.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.no_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.no_regularizer.md
deleted file mode 100644
index cb55675641..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.no_regularizer.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.no_regularizer(_)` {#no_regularizer}
-
-Use this function to prevent regularization of variables.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pack.md new file mode 100644 index 0000000000..75a5fbe15c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pack.md @@ -0,0 +1,23 @@ +### `tf.pack(values, name='pack')` {#pack} + +Packs a list of rank-`R` tensors into one rank-`(R+1)` tensor. + +Packs tensors in `values` into a tensor with rank one higher than each tensor +in `values` and shape `[len(values)] + values[0].shape`. The output satisfies +`output[i, ...] = values[i][...]`. + +This is the opposite of unpack. The numpy equivalent is + + tf.pack([x, y, z]) = np.asarray([x, y, z]) + +##### Args: + + +* `values`: A list of `Tensor` objects with the same shape and type. +* `name`: A name for this operation (optional). + +##### Returns: + + +* `output`: A packed `Tensor` with the same type as `values`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.placeholder.md deleted file mode 100644 index 28cdc11cce..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.placeholder.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.placeholder(dtype, shape=None, name=None)` {#placeholder} - -Inserts a placeholder for a tensor that will be always fed. - -**Important**: This tensor will produce an error if evaluated. Its value must -be fed using the `feed_dict` optional argument to `Session.run()`, -`Tensor.eval()`, or `Operation.run()`. - -For example: - -```python -x = tf.placeholder(tf.float32, shape=(1024, 1024)) -y = tf.matmul(x, x) - -with tf.Session() as sess: - print(sess.run(y)) # ERROR: will fail because x was not fed. - - rand_array = np.random.rand(1024, 1024) - print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. 
-``` - -##### Args: - - -* `dtype`: The type of elements in the tensor to be fed. -* `shape`: The shape of the tensor to be fed (optional). If the shape is not - specified, you can feed a tensor of any shape. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` that may be used as a handle for feeding a value, but not - evaluated directly. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pow.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pow.md new file mode 100644 index 0000000000..8588b72fb8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.pow.md @@ -0,0 +1,24 @@ +### `tf.pow(x, y, name=None)` {#pow} + +Computes the power of one value to another. + +Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for +corresponding elements in `x` and `y`. For example: + +``` +# tensor 'x' is [[2, 2], [3, 3]] +# tensor 'y' is [[8, 16], [2, 3]] +tf.pow(x, y) ==> [[256, 65536], [9, 27]] +``` + +##### Args: + + +* `x`: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`. +* `y`: A `Tensor` of type `float`, `double`, `int32`, `complex64`, or `int64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.py_func.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.py_func.md new file mode 100644 index 0000000000..c115d21781 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.py_func.md @@ -0,0 +1,31 @@ +### `tf.py_func(func, inp, Tout, name=None)` {#py_func} + +Wraps a python function and uses it as a tensorflow op. + +Given a python function `func`, which takes numpy arrays as its +inputs and returns numpy arrays as its outputs. 
E.g.,
+
+```python
+def my_func(x):
+  # x will be a numpy array with the contents of the placeholder below
+  return np.sinh(x)
+inp = tf.placeholder(tf.float32, [...])
+y = tf.py_func(my_func, [inp], [tf.float32])
+```
+
+The above snippet constructs a tf graph which invokes a numpy
+sinh(x) as an op in the graph.
+
+##### Args:
+
+
+* `func`: A python function.
+* `inp`: A list of `Tensor`.
+* `Tout`: A list of tensorflow data types indicating what `func`
+    returns.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A list of `Tensor` which `func` computes.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform.md
deleted file mode 100644
index 517bdd98c4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)` {#random_uniform}
-
-Outputs random values from a uniform distribution.
-
-The generated values follow a uniform distribution in the range
-`[minval, maxval)`. The lower bound `minval` is included in the range, while
-the upper bound `maxval` is excluded.
-
-For floats, the default range is `[0, 1)`. For ints, at least `maxval` must
-be specified explicitly.
-
-In the integer case, the random integers are slightly biased unless
-`maxval - minval` is an exact power of two. The bias is small for values of
-`maxval - minval` significantly smaller than the range of the output (either
-`2**32` or `2**64`).
-
-##### Args:
-
-
-* `shape`: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* `minval`: A 0-D Tensor or Python value of type `dtype`. The lower bound on the
-    range of random values to generate. Defaults to 0.
-* `maxval`: A 0-D Tensor or Python value of type `dtype`. The upper bound on
-    the range of random values to generate.
Defaults to 1 if `dtype` is - floating point. -* `dtype`: The type of the output: `float32`, `float64`, `int32`, or `int64`. -* `seed`: A Python integer. Used to create a random seed for the distribution. - See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for the operation (optional). - -##### Returns: - - A tensor of the specified shape filled with random uniform values. - -##### Raises: - - -* `ValueError`: If `dtype` is integral and `maxval` is not specified. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.rank.md deleted file mode 100644 index 8d8fdb4af4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.rank.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.rank(input, name=None)` {#rank} - -Returns the rank of a tensor. - -This operation returns an integer representing the rank of `input`. - -For example: - -```prettyprint -# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] -# shape of tensor 't' is [2, 2, 3] -rank(t) ==> 3 -``` - -**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank -of a tensor is the number of indices required to uniquely select each element -of the tensor. Rank is also known as "order", "degree", or "ndims." - -##### Args: - - -* `input`: A `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int32`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.real.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.real.md deleted file mode 100644 index 3be066f588..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.real.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.real(input, name=None)` {#real} - -Returns the real part of a complex number. 
- -Given a tensor `input` of complex numbers, this operation returns a tensor of -type `float` or `double` that is the real part of each element in `input`. -All elements in `input` must be complex numbers of the form \(a + bj\), -where *a* is the real part returned by this operation and *b* is the -imaginary part. - -For example: - -``` -# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] -tf.real(input) ==> [-2.25, 3.25] -``` - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `complex64`, - `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float` or `double`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_prod.md new file mode 100644 index 0000000000..a87daa33fb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_prod.md @@ -0,0 +1,25 @@ +### `tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_prod} + +Computes the product of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +##### Args: + + +* `input_tensor`: The tensor to reduce. Should have numeric type. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. 
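The reduction behaves like NumPy's `prod`; a pure-NumPy sketch of the semantics described above (NumPy stands in here, since the op itself runs inside a TensorFlow graph):

```python
import numpy as np

x = np.array([[1., 2., 3.],
              [4., 5., 6.]])

# reduction_indices=None: reduce all dimensions to a single element.
total = np.prod(x)                                # 720.0

# reduction_indices=[0]: reduce along dimension 0 only.
col_products = np.prod(x, axis=0)                 # [4., 10., 18.]

# keep_dims=True: the reduced dimension is retained with length 1.
row_products = np.prod(x, axis=1, keepdims=True)  # [[6.], [120.]]
```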
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.report_uninitialized_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.report_uninitialized_variables.md deleted file mode 100644 index 35536e65d9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.report_uninitialized_variables.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.report_uninitialized_variables(var_list=None, name='report_uninitialized_variables')` {#report_uninitialized_variables} - -Adds ops to list the names of uninitialized variables. - -When run, it returns a 1-D tensor containing the names of uninitialized -variables if there are any, or an empty array if there are none. - -##### Args: - - -* `var_list`: List of `Variable` objects to check. Defaults to the - value of `all_variables() + local_variables()` -* `name`: Optional name of the `Operation`. - -##### Returns: - - A 1-D tensor containing names of the unintialized variables, or an empty 1-D - tensor if there are no variables or no uninitialized variables. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reverse.md new file mode 100644 index 0000000000..e316d5faae --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reverse.md @@ -0,0 +1,61 @@ +### `tf.reverse(tensor, dims, name=None)` {#reverse} + +Reverses specific dimensions of a tensor. + +Given a `tensor`, and a `bool` tensor `dims` representing the dimensions +of `tensor`, this operation reverses each dimension i of `tensor` where +`dims[i]` is `True`. + +`tensor` can have up to 8 dimensions. The number of dimensions +of `tensor` must equal the number of elements in `dims`. 
In other words:
+
+`rank(tensor) = size(dims)`
+
+For example:
+
+```prettyprint
+# tensor 't' is [[[[ 0,  1,  2,  3],
+#                  [ 4,  5,  6,  7],
+#                  [ 8,  9, 10, 11]],
+#                 [[12, 13, 14, 15],
+#                  [16, 17, 18, 19],
+#                  [20, 21, 22, 23]]]]
+# tensor 't' shape is [1, 2, 3, 4]
+
+# 'dims' is [False, False, False, True]
+reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
+                        [ 7,  6,  5,  4],
+                        [11, 10,  9,  8]],
+                       [[15, 14, 13, 12],
+                        [19, 18, 17, 16],
+                        [23, 22, 21, 20]]]]
+
+# 'dims' is [False, True, False, False]
+reverse(t, dims) ==> [[[[12, 13, 14, 15],
+                        [16, 17, 18, 19],
+                        [20, 21, 22, 23]],
+                       [[ 0,  1,  2,  3],
+                        [ 4,  5,  6,  7],
+                        [ 8,  9, 10, 11]]]]
+
+# 'dims' is [False, False, True, False]
+reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
+                        [ 4,  5,  6,  7],
+                        [ 0,  1,  2,  3]],
+                       [[20, 21, 22, 23],
+                        [16, 17, 18, 19],
+                        [12, 13, 14, 15]]]]
+```
+
+##### Args:
+
+
+* `tensor`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `bool`, `float32`, `float64`.
+    Up to 8-D.
+* `dims`: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.scan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.scan.md
new file mode 100644
index 0000000000..6ea0ac677b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.scan.md
@@ -0,0 +1,44 @@
+### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#scan}
+
+scan on the list of tensors unpacked from `elems` on dimension 0.
+
+This scan operator repeatedly applies the callable `fn` to a sequence
+of elements from first to last. The elements are made of the tensors
+unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
+arguments.
The first argument is the accumulated value computed from the +preceding invocation of fn. If `initializer` is None, `elems` must contain +at least one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`. + +##### Args: + + +* `fn`: The callable to be performed. +* `elems`: A tensor to be unpacked on dimension 0. +* `initializer`: (optional) The initial value for the accumulator. +* `parallel_iterations`: (optional) The number of iterations allowed to run + in parallel. +* `back_prop`: (optional) True enables back propagation. +* `swap_memory`: (optional) True enables GPU-CPU memory swapping. +* `name`: (optional) Name prefix for the returned tensors. + +##### Returns: + + A tensor that packs the results of applying `fn` to the list of tensors + unpacked from `elems`, from first to last. + +##### Raises: + + +* `TypeError`: if `fn` is not callable. + +##### Example: + + ```python + elems = [1, 2, 3, 4, 5, 6] + sum = scan(lambda a, x: a + x, elems) + # sum == [1, 3, 6, 10, 15, 21] + ``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.segment_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.segment_prod.md new file mode 100644 index 0000000000..c9ed2759cf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.segment_prod.md @@ -0,0 +1,31 @@ +### `tf.segment_prod(data, segment_ids, name=None)` {#segment_prod} + +Computes the product along segments of a tensor. + +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +Computes a tensor such that +\\(output_i = \prod_j data_j\\) where the product is over `j` such +that `segment_ids[j] == i`. + +
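A small pure-NumPy sketch of the formula above (NumPy standing in for the TensorFlow op; the data values are made up for illustration):

```python
import numpy as np

data = np.array([1., 2., 3., 4., 5.])
segment_ids = np.array([0, 0, 0, 1, 2])  # sorted, as required

# output_i is the product of every data_j with segment_ids[j] == i,
# so the output has one entry per segment.
output = np.ones(segment_ids.max() + 1)
for value, seg in zip(data, segment_ids):
    output[seg] *= value
# output -> [6., 4., 5.]
```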
+ +
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.self_adjoint_eig.md deleted file mode 100644 index efbc0cd3be..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.self_adjoint_eig.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.self_adjoint_eig(input, name=None)` {#self_adjoint_eig} - -Calculates the Eigen Decomposition of a square Self-Adjoint matrix. - -Only the lower-triangular part of the input will be used in this case. The -upper-triangular part will not be read. - -The result is a M+1 x M matrix whose first row is the eigenvalues, and -subsequent rows are eigenvectors. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[M+1, M]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape.md deleted file mode 100644 index 4262f41a3d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.shape(input, name=None)` {#shape} - -Returns the shape of a tensor. 
-
-This operation returns a 1-D integer tensor representing the shape of `input`.
-
-For example:
-
-```prettyprint
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-shape(t) ==> [2, 2, 3]
-```
-
-##### Args:
-
-
-* `input`: A `Tensor`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `int32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape_n.md
new file mode 100644
index 0000000000..a229253406
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.shape_n.md
@@ -0,0 +1,16 @@
+### `tf.shape_n(input, name=None)` {#shape_n}
+
+Returns the shapes of tensors.
+
+This operation returns N 1-D integer tensors, where the i-th tensor is the shape of `input[i]`.
+
+##### Args:
+
+
+* `input`: A list of at least one `Tensor` object of the same type.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A list of `Tensor` objects of type `int32`, one per tensor in `input`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.squared_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.squared_difference.md
new file mode 100644
index 0000000000..d6bb175669
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.squared_difference.md
@@ -0,0 +1,15 @@
+### `tf.squared_difference(x, y, name=None)` {#squared_difference}
+
+Returns (x - y)(x - y) element-wise.
+
+##### Args:
+
+
+* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
+* `y`: A `Tensor`. Must have the same type as `x`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `x`.
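Element-wise means the result has the same shape as the inputs; a pure-NumPy sketch of the same computation (NumPy standing in for the TensorFlow op):

```python
import numpy as np

x = np.array([1., 2., 3.])
y = np.array([3., 2., 0.])

# Equivalent to squared_difference(x, y): the element-wise squared gap.
out = (x - y) ** 2
# out -> [4., 0., 9.]
```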
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket.md deleted file mode 100644 index 941d50e139..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.string_to_hash_bucket(string_tensor, num_buckets, name=None)` {#string_to_hash_bucket} - -Converts each string in the input Tensor to its hash mod by a number of buckets. - -The hash function is deterministic on the content of the string within the -process. - -Note that the hash function may change from time to time. - -##### Args: - - -* `string_tensor`: A `Tensor` of type `string`. -* `num_buckets`: An `int` that is `>= 1`. The number of buckets. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. - A Tensor of the same shape as the input `string_tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sub.md new file mode 100644 index 0000000000..2d1da0f0b9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sub.md @@ -0,0 +1,15 @@ +### `tf.sub(x, y, name=None)` {#sub} + +Returns x - y element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
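Assuming the usual NumPy-style broadcasting rules for element-wise math ops (the docstring itself does not spell them out), a pure-NumPy sketch of the subtraction:

```python
import numpy as np

x = np.array([[1., 2.],
              [3., 4.]])
y = np.array([0.5, 1.0])  # broadcast against each row of x

# Equivalent to sub(x, y): element-wise x - y with broadcasting.
out = x - y
# out -> [[0.5, 1.0], [2.5, 3.0]]
```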
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md deleted file mode 100644 index e36d6163a7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md +++ /dev/null @@ -1,10 +0,0 @@ -### `tf.test.get_temp_dir()` {#get_temp_dir} - -Returns a temporary directory for use during tests. - -There is no need to delete the directory after the test. - -##### Returns: - - The temporary directory. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.is_built_with_cuda.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.is_built_with_cuda.md new file mode 100644 index 0000000000..51e3d97d8c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.is_built_with_cuda.md @@ -0,0 +1,4 @@ +### `tf.test.is_built_with_cuda()` {#is_built_with_cuda} + +Returns whether TensorFlow was built with CUDA (GPU) support. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.tile.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.tile.md deleted file mode 100644 index 650f1f7eb8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.tile.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.tile(input, multiples, name=None)` {#tile} - -Constructs a tensor by tiling a given tensor. - -This operation creates a new tensor by replicating `input` `multiples` times. -The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements, -and the values of `input` are replicated `multiples[i]` times along the 'i'th -dimension. For example, tiling `[a b c d]` by `[2]` produces -`[a b c d a b c d]`. - -##### Args: - - -* `input`: A `Tensor`. 1-D or higher. -* `multiples`: A `Tensor` of type `int32`. - 1-D. 
Length must be the same as the number of dimensions in `input`
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.to_int64.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.to_int64.md
new file mode 100644
index 0000000000..0762822b3d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.to_int64.md
@@ -0,0 +1,19 @@
+### `tf.to_int64(x, name='ToInt64')` {#to_int64}
+
+Casts a tensor to type `int64`.
+
+##### Args:
+
+
+* `x`: A `Tensor` or `SparseTensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` or `SparseTensor` with the same shape as `x`, with type `int64`.
+
+##### Raises:
+
+
+* `TypeError`: If `x` cannot be cast to `int64`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
deleted file mode 100644
index 9a14c50dc8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
+++ /dev/null
@@ -1,23 +0,0 @@
-Optimizer that implements the Adadelta algorithm.
-
-See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
-([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
-
-Construct a new Adadelta optimizer.
-
-##### Args:
-
-
-* `learning_rate`: A `Tensor` or a floating point value. The learning rate.
-* `rho`: A `Tensor` or a floating point value. The decay rate.
-* `epsilon`: A `Tensor` or a floating point value.  A constant epsilon used
-         to better conditioning the grad update.
-* `use_locking`: If `True` use locks for update operations.
-* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "Adadelta". - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdagradOptimizer.md new file mode 100644 index 0000000000..35e416386e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdagradOptimizer.md @@ -0,0 +1,26 @@ +Optimizer that implements the Adagrad algorithm. + +See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf). + +- - - + +#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__} + +Construct a new Adagrad optimizer. + +##### Args: + + +* `learning_rate`: A `Tensor` or a floating point value. The learning rate. +* `initial_accumulator_value`: A floating point value. + Starting value for the accumulators, must be positive. +* `use_locking`: If `True` use locks for update operations. +* `name`: Optional name prefix for the operations created when applying + gradients. Defaults to "Adagrad". + +##### Raises: + + +* `ValueError`: If the `initial_accumulator_value` is invalid. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md new file mode 100644 index 0000000000..99a5f1f0b1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md @@ -0,0 +1,18 @@ +Optimizer that implements the gradient descent algorithm. + +- - - + +#### `tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')` {#GradientDescentOptimizer.__init__} + +Construct a new gradient descent optimizer. 
+ +##### Args: + + +* `learning_rate`: A Tensor or a floating point value. The learning + rate to use. +* `use_locking`: If True use locks for update operations. +* `name`: Optional name prefix for the operations created when applying + gradients. Defaults to "GradientDescent". + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.loop.md deleted file mode 100644 index 6665ca7369..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.loop.md +++ /dev/null @@ -1,22 +0,0 @@ -#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop} - -Start a LooperThread that calls a function periodically. - -If `timer_interval_secs` is None the thread calls `target(args)` -repeatedly. Otherwise `target(args)` is called every `timer_interval_secs` -seconds. The thread terminates when a stop of the coordinator is -requested. - -##### Args: - - -* `coord`: A Coordinator. -* `timer_interval_secs`: Number. Time boundaries at which to call `target`. -* `target`: A callable object. -* `args`: Optional arguments to pass to `target` when calling it. -* `kwargs`: Optional keyword arguments to pass to `target` when calling it. - -##### Returns: - - The started thread. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.global_step.md new file mode 100644 index 0000000000..a53175be6a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.global_step.md @@ -0,0 +1,27 @@ +### `tf.train.global_step(sess, global_step_tensor)` {#global_step} + +Small helper to get the global step. + +```python +# Creates a variable to hold the global_step. 
+global_step_tensor = tf.Variable(10, trainable=False, name='global_step') +# Creates a session. +sess = tf.Session() +# Initializes the variable. +sess.run(global_step_tensor.initializer) +print('global_step: %s' % tf.train.global_step(sess, global_step_tensor)) + +global_step: 10 +``` + +##### Args: + + +* `sess`: A TensorFlow `Session` object. +* `global_step_tensor`: `Tensor` or the `name` of the operation that contains + the global step. + +##### Returns: + + The global step value. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.match_filenames_once.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.match_filenames_once.md deleted file mode 100644 index 6c84221cc5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.match_filenames_once.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.train.match_filenames_once(pattern, name=None)` {#match_filenames_once} - -Save the list of files matching pattern, so it is only computed once. - -##### Args: - - -* `pattern`: A file pattern (glob). -* `name`: A name for the operations (optional). - -##### Returns: - - A variable that is initialized to the list of files matching pattern. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.replica_device_setter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.replica_device_setter.md deleted file mode 100644 index a5ea200562..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.replica_device_setter.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.train.replica_device_setter(ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None)` {#replica_device_setter} - -Return a `device function` to use when building a Graph for replicas. 
- -Device Functions are used in `with tf.device(device_function):` statement to -automatically assign devices to `Operation` objects as they are constructed, -Device constraints are added from the inner-most context first, working -outwards. The merging behavior adds constraints to fields that are yet unset -by a more inner context. Currently the fields are (job, task, cpu/gpu). - -If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op. - -For example, - -```python -# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker -# jobs on hosts worker0, worker1 and worker2. -cluster_spec = { - "ps": ["ps0:2222", "ps1:2222"], - "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]} -with tf.device(tf.replica_device_setter(cluster=cluster_spec)): - # Build your graph - v1 = tf.Variable(...) # assigned to /job:ps/task:0 - v2 = tf.Variable(...) # assigned to /job:ps/task:1 - v3 = tf.Variable(...) # assigned to /job:ps/task:0 -# Run compute -``` - -##### Args: - - -* `ps_tasks`: Number of tasks in the `ps` job. -* `ps_device`: String. Device of the `ps` job. If empty no `ps` job is used. - Defaults to `ps`. -* `worker_device`: String. Device of the `worker` job. If empty no `worker` - job is used. -* `merge_devices`: `Boolean`. If `True`, merges or only sets a device if the - device constraint is completely unset. merges device specification rather - than overriding them. -* `cluster`: `ClusterDef` proto or `ClusterSpec`. -* `ps_ops`: List of `Operation` objects that need to be placed on `ps` devices. - -##### Returns: - - A function to pass to `tf.device()`. - -##### Raises: - - TypeError if `cluster` is not a dictionary or `ClusterDef` protocol buffer. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.slice_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.slice_input_producer.md deleted file mode 100644 index da888d0fc2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.slice_input_producer.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#slice_input_producer} - -Produces a slice of each `Tensor` in `tensor_list`. - -Implemented using a Queue -- a `QueueRunner` for the Queue -is added to the current `Graph`'s `QUEUE_RUNNER` collection. - -##### Args: - - -* `tensor_list`: A list of `Tensor` objects. Every `Tensor` in - `tensor_list` must have the same size in the first dimension. -* `num_epochs`: An integer (optional). If specified, `slice_input_producer` - produces each slice `num_epochs` times before generating - an `OutOfRange` error. If not specified, `slice_input_producer` can cycle - through the slices an unlimited number of times. -* `shuffle`: Boolean. If true, the integers are randomly shuffled within each - epoch. -* `seed`: An integer (optional). Seed used if shuffle == True. -* `capacity`: An integer. Sets the queue capacity. -* `shared_name`: (optional). If set, this queue will be shared under the given - name across multiple sessions. -* `name`: A name for the operations (optional). - -##### Returns: - - A list of tensors, one for each element of `tensor_list`. If the tensor - in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output - tensor will have shape `[a, b, ..., z]`. - -##### Raises: - - -* `ValueError`: if `slice_input_producer` produces nothing from `tensor_list`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.start_queue_runners.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.start_queue_runners.md new file mode 100644 index 0000000000..21ac6efee8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.start_queue_runners.md @@ -0,0 +1,24 @@ +### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners} + +Starts all queue runners collected in the graph. + +This is a companion method to `add_queue_runner()`. It just starts +threads for all queue runners collected in the graph. It returns +the list of all threads. + +##### Args: + + +* `sess`: `Session` used to run the queue ops. Defaults to the + default session. +* `coord`: Optional `Coordinator` for coordinating the started threads. +* `daemon`: Whether the threads should be marked as `daemons`, meaning + they don't block program exit. +* `start`: Set to `False` to only create the threads, not start them. +* `collection`: A `GraphKey` specifying the graph collection to + get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`. + +##### Returns: + + A list of threads. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.string_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.string_input_producer.md deleted file mode 100644 index 5ca2a4cb86..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.string_input_producer.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#string_input_producer} - -Output strings (e.g. filenames) to a queue for an input pipeline. - -##### Args: - - -* `string_tensor`: A 1-D string tensor with the strings to produce. 
-* `num_epochs`: An integer (optional). If specified, `string_input_producer` - produces each string from `string_tensor` `num_epochs` times before - generating an `OutOfRange` error. If not specified, - `string_input_producer` can cycle through the strings in `string_tensor` - an unlimited number of times. -* `shuffle`: Boolean. If true, the strings are randomly shuffled within each - epoch. -* `seed`: An integer (optional). Seed used if shuffle == True. -* `capacity`: An integer. Sets the queue capacity. -* `shared_name`: (optional). If set, this queue will be shared under the given - name across multiple sessions. -* `name`: A name for the operations (optional). - -##### Returns: - - A queue with the output strings. A `QueueRunner` for the Queue - is added to the current `Graph`'s `QUEUE_RUNNER` collection. - -##### Raises: - - -* `ValueError`: If the string_tensor is a null Python list. At runtime, - will fail with an assertion if string_tensor becomes a null tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.truncated_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.truncated_normal.md deleted file mode 100644 index 9ae13882d3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.truncated_normal.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#truncated_normal} - -Outputs random values from a truncated normal distribution. - -The generated values follow a normal distribution with specified mean and -standard deviation, except that values whose magnitude is more than 2 standard -deviations from the mean are dropped and re-picked. - -##### Args: - - -* `shape`: A 1-D integer Tensor or Python array. The shape of the output tensor. -* `mean`: A 0-D Tensor or Python value of type `dtype`. The mean of the - truncated normal distribution. 
-* `stddev`: A 0-D Tensor or Python value of type `dtype`. The standard deviation - of the truncated normal distribution. -* `dtype`: The type of the output. -* `seed`: A Python integer. Used to create a random seed for the distribution. - See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for the operation (optional). - -##### Returns: - - A tensor of the specified shape filled with random truncated normal values. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unique_with_counts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unique_with_counts.md new file mode 100644 index 0000000000..2d3d32d970 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unique_with_counts.md @@ -0,0 +1,36 @@ +### `tf.unique_with_counts(x, name=None)` {#unique_with_counts} + +Finds unique elements in a 1-D tensor. + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`. This operation also returns a +tensor `idx` the same size as `x` that contains the index of each value of `x` +in the unique output `y`. Finally, it returns a third tensor `count` that +contains the count of each element of `y` in `x`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` + +For example: + +```prettyprint +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx, count = unique_with_counts(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +count ==> [2, 1, 3, 1, 2] +``` + +##### Args: + + +* `x`: A `Tensor`. 1-D. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of `Tensor` objects (y, idx, count). + +* `y`: A `Tensor`. Has the same type as `x`. 1-D. +* `idx`: A `Tensor` of type `int32`. 1-D. +* `count`: A `Tensor` of type `int32`. 1-D. 
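The defining property `y[idx[i]] == x[i]` can be checked with NumPy's `unique` (a stand-in sketch; note that `np.unique` returns `y` sorted, whereas this op keeps first-occurrence order — the two coincide for the already-sorted input used here):

```python
import numpy as np

x = np.array([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx, count = np.unique(x, return_inverse=True, return_counts=True)
# y     -> [1 2 4 7 8]
# idx   -> [0 0 1 2 2 2 3 4 4]
# count -> [2 1 3 1 2]

# Reconstructing x from y and idx demonstrates the identity above.
assert (y[idx] == x).all()
```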
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.verify_tensor_all_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.verify_tensor_all_finite.md new file mode 100644 index 0000000000..37fa105df5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.verify_tensor_all_finite.md @@ -0,0 +1,15 @@ +### `tf.verify_tensor_all_finite(t, msg, name=None)` {#verify_tensor_all_finite} + +Assert that the tensor does not contain any NaN's or Inf's. + +##### Args: + + +* `t`: Tensor to check. +* `msg`: Message to log on failure. +* `name`: A name for this operation (optional). + +##### Returns: + + Same tensor as `t`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.from_string.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.from_string.md deleted file mode 100644 index 5cbba0ada6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.from_string.md +++ /dev/null @@ -1,18 +0,0 @@ -#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string} - -Construct a `DeviceSpec` from a string. - -##### Args: - - -* `spec`: a string of the form - /job:/replica:/task:/device:CPU: - or - /job:/replica:/task:/device:GPU: - as cpu and gpu are mutually exclusive. - All entries are optional. - -##### Returns: - - A DeviceSpec. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.md deleted file mode 100644 index 18c651a45d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DeviceSpec.md +++ /dev/null @@ -1,146 +0,0 @@ -Represents a (possibly partial) specification for a TensorFlow device. - -`DeviceSpec`s are used throughout TensorFlow to describe where state is stored -and computations occur. 
Using `DeviceSpec` allows you to parse device spec -strings to verify their validity, merge them or compose them programmatically. - -Example: -```python -# Place the operations on device "GPU:0" in the "ps" job. -device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) -with tf.device(device_spec): - # Both my_var and squared_var will be placed on /job:ps/device:GPU:0. - my_var = tf.Variable(..., name="my_variable") - squared_var = tf.square(my_var) -``` - -If a `DeviceSpec` is partially specified, it will be merged with other -`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec` -components defined in inner scopes take precedence over those defined in -outer scopes. - -```python -with tf.device(DeviceSpec(job="train", )): - with tf.device(DeviceSpec(job="ps", device_type="GPU", device_index=0): - # Nodes created here will be assigned to /job:ps/device:GPU:0. - with tf.device(DeviceSpec(device_type="GPU", device_index=1): - # Nodes created here will be assigned to /job:train/device:GPU:1. -``` - -A `DeviceSpec` consists of 5 components -- each of -which is optionally specified: - -* Job: The job name. -* Replica: The replica index. -* Task: The task index. -* Device type: The device type string (e.g. "CPU" or "GPU"). -* Device index: The device index. -- - - - -#### `tf.DeviceSpec.__init__(job=None, replica=None, task=None, device_type=None, device_index=None)` {#DeviceSpec.__init__} - -Create a new `DeviceSpec` object. - -##### Args: - - -* `job`: string. Optional job name. -* `replica`: int. Optional replica index. -* `task`: int. Optional task index. -* `device_type`: Optional device type string (e.g. "CPU" or "GPU") -* `device_index`: int. Optional device index. If left - unspecified, device represents 'any' device_index. - - -- - - - -#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string} - -Construct a `DeviceSpec` from a string. 
- -##### Args: - - -* `spec`: a string of the form - /job:/replica:/task:/device:CPU: - or - /job:/replica:/task:/device:GPU: - as cpu and gpu are mutually exclusive. - All entries are optional. - -##### Returns: - - A DeviceSpec. - - -- - - - -#### `tf.DeviceSpec.job` {#DeviceSpec.job} - - - - -- - - - -#### `tf.DeviceSpec.merge_from(dev)` {#DeviceSpec.merge_from} - -Merge the properties of "dev" into this `DeviceSpec`. - -##### Args: - - -* `dev`: a `DeviceSpec`. - - -- - - - -#### `tf.DeviceSpec.parse_from_string(spec)` {#DeviceSpec.parse_from_string} - -Parse a `DeviceSpec` name into its components. - -##### Args: - - -* `spec`: a string of the form - /job:/replica:/task:/device:CPU: - or - /job:/replica:/task:/device:GPU: - as cpu and gpu are mutually exclusive. - All entries are optional. - -##### Returns: - - The `DeviceSpec`. - -##### Raises: - - -* `ValueError`: if the spec was not valid. - - -- - - - -#### `tf.DeviceSpec.replica` {#DeviceSpec.replica} - - - - -- - - - -#### `tf.DeviceSpec.task` {#DeviceSpec.task} - - - - -- - - - -#### `tf.DeviceSpec.to_string()` {#DeviceSpec.to_string} - -Return a string representation of this `DeviceSpec`. - -##### Returns: - - a string of the form - /job:/replica:/task:/device::. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.IdentityReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.IdentityReader.md deleted file mode 100644 index 46ba1e9d17..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.IdentityReader.md +++ /dev/null @@ -1,148 +0,0 @@ -A Reader that outputs the queued work as both the key and value. - -To use, enqueue strings in a Queue. Read will take the front -work string and output (work, work). - -See ReaderBase for supported methods. -- - - - -#### `tf.IdentityReader.__init__(name=None)` {#IdentityReader.__init__} - -Create a IdentityReader. - -##### Args: - - -* `name`: A name for the operation (optional). 
- - -- - - - -#### `tf.IdentityReader.num_records_produced(name=None)` {#IdentityReader.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.IdentityReader.num_work_units_completed(name=None)` {#IdentityReader.num_work_units_completed} - -Returns the number of work units this reader has finished processing. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.IdentityReader.read(queue, name=None)` {#IdentityReader.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.IdentityReader.reader_ref` {#IdentityReader.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.IdentityReader.reset(name=None)` {#IdentityReader.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.IdentityReader.restore_state(state, name=None)` {#IdentityReader.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. - -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. 
-* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.IdentityReader.serialize_state(name=None)` {#IdentityReader.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. - - -- - - - -#### `tf.IdentityReader.supports_serialize` {#IdentityReader.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md new file mode 100644 index 0000000000..cdb5101815 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md @@ -0,0 +1,68 @@ +A TensorFlow `Session` for use in interactive contexts, such as a shell. + +The only difference with a regular `Session` is that an `InteractiveSession` +installs itself as the default session on construction. +The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) +and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run) +will use that session to run ops. + +This is convenient in interactive shells and [IPython +notebooks](http://ipython.org), as it avoids having to pass an explicit +`Session` object to run ops. + +For example: + +```python +sess = tf.InteractiveSession() +a = tf.constant(5.0) +b = tf.constant(6.0) +c = a * b +# We can just use 'c.eval()' without passing 'sess' +print(c.eval()) +sess.close() +``` + +Note that a regular session installs itself as the default session when it +is created in a `with` statement. 
The common usage in non-interactive
+programs is to follow that pattern:
+
+```python
+a = tf.constant(5.0)
+b = tf.constant(6.0)
+c = a * b
+with tf.Session():
+  # We can also use 'c.eval()' here.
+  print(c.eval())
+```
+
+- - -
+
+#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__}
+
+Creates a new interactive TensorFlow session.
+
+If no `graph` argument is specified when constructing the session,
+the default graph will be launched in the session. If you are
+using more than one graph (created with `tf.Graph()` in the same
+process), you will have to use different sessions for each graph,
+but each graph can be used in multiple sessions. In this case, it
+is often clearer to pass the graph to be launched explicitly to
+the session constructor.
+
+##### Args:
+
+
+* `target`: (Optional.) The execution engine to connect to.
+    Defaults to using an in-process engine. At present, no value
+    other than the empty string is supported.
+* `graph`: (Optional.) The `Graph` to be launched (described above).
+* `config`: (Optional.) `ConfigProto` proto used to configure the session.
+
+
+- - -
+
+#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
+
+Closes an `InteractiveSession`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.QueueBase.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.QueueBase.from_list.md
new file mode 100644
index 0000000000..d9a2e7c71f
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.QueueBase.from_list.md
@@ -0,0 +1,21 @@
+#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
+
+Create a queue using the queue reference from `queues[index]`.
+
+##### Args:
+
+
+* `index`: An integer scalar tensor that determines the input that gets
+    selected.
+* `queues`: A list of `QueueBase` objects.
+
+##### Returns:
+
+  A `QueueBase` object.
+ +##### Raises: + + +* `TypeError`: When `queues` is not a list of `QueueBase` objects, + or when the data types of `queues` are not all the same. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RandomShuffleQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RandomShuffleQueue.md deleted file mode 100644 index cd617e7578..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RandomShuffleQueue.md +++ /dev/null @@ -1,54 +0,0 @@ -A queue implementation that dequeues elements in a random order. - -See [`tf.QueueBase`](#QueueBase) for a description of the methods on -this class. - -- - - - -#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__} - -Create a queue that dequeues elements in a random order. - -A `RandomShuffleQueue` has bounded capacity; supports multiple -concurrent producers and consumers; and provides exactly-once -delivery. - -A `RandomShuffleQueue` holds a list of up to `capacity` -elements. Each element is a fixed-length tuple of tensors whose -dtypes are described by `dtypes`, and whose shapes are optionally -described by the `shapes` argument. - -If the `shapes` argument is specified, each component of a queue -element must have the respective fixed shape. If it is -unspecified, different queue elements may have different shapes, -but the use of `dequeue_many` is disallowed. - -The `min_after_dequeue` argument allows the caller to specify a -minimum number of elements that will remain in the queue after a -`dequeue` or `dequeue_many` operation completes, to ensure a -minimum level of mixing of elements. This invariant is maintained -by blocking those operations until sufficient elements have been -enqueued. The `min_after_dequeue` argument is ignored after the -queue has been closed. - -##### Args: - - -* `capacity`: An integer. 
The upper bound on the number of elements - that may be stored in this queue. -* `min_after_dequeue`: An integer (described above). -* `dtypes`: A list of `DType` objects. The length of `dtypes` must equal - the number of tensors in each queue element. -* `shapes`: (Optional.) A list of fully-defined `TensorShape` objects - with the same length as `dtypes`, or `None`. -* `names`: (Optional.) A list of string naming the components in the queue - with the same length as `dtypes`, or `None`. If specified the dequeue - methods return a dictionary with the names as keys. -* `seed`: A Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `shared_name`: (Optional.) If non-empty, this queue will be shared under - the given name across multiple sessions. -* `name`: Optional name for the queue operation. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RegisterShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RegisterShape.md deleted file mode 100644 index e3bb956f88..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.RegisterShape.md +++ /dev/null @@ -1,27 +0,0 @@ -A decorator for registering the shape function for an op type. - -This decorator is only used when defining a new op type. A shape -function is a function from an `Operation` object to a list of -`TensorShape` objects, with one `TensorShape` for each output of the -operation. - -For example, assuming that operations of type `"Sub"` take two -inputs `x` and `y`, and return a single output `x - y`, all with the -same shape, the following shape function would be registered: - -```python -@tf.RegisterShape("Sub") -def _sub_shape(op): - return [op.inputs[0].get_shape().merge_with(op.inputs[1].get_shape())] -``` - -The decorator argument `op_type` is the string type of an -operation. 
This corresponds to the `OpDef.name` field for the proto -that defines the operation. -- - - - -#### `tf.RegisterShape.__init__(op_type)` {#RegisterShape.__init__} - -Saves the `op_type` as the `Operation` type. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Session.md deleted file mode 100644 index 62982698dd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Session.md +++ /dev/null @@ -1,236 +0,0 @@ -A class for running TensorFlow operations. - -A `Session` object encapsulates the environment in which `Operation` -objects are executed, and `Tensor` objects are evaluated. For -example: - -```python -# Build a graph. -a = tf.constant(5.0) -b = tf.constant(6.0) -c = a * b - -# Launch the graph in a session. -sess = tf.Session() - -# Evaluate the tensor `c`. -print(sess.run(c)) -``` - -A session may own resources, such as -[variables](../../api_docs/python/state_ops.md#Variable), [queues](../../api_docs/python/io_ops.md#QueueBase), -and [readers](../../api_docs/python/io_ops.md#ReaderBase). It is important to release -these resources when they are no longer required. To do this, either -invoke the [`close()`](#Session.close) method on the session, or use -the session as a context manager. The following two examples are -equivalent: - -```python -# Using the `close()` method. -sess = tf.Session() -sess.run(...) -sess.close() - -# Using the context manager. -with tf.Session() as sess: - sess.run(...) -``` - -The [`ConfigProto`] -(https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) -protocol buffer exposes various configuration options for a -session. 
For example, to create a session that uses soft constraints -for device placement, and log the resulting placement decisions, -create a session as follows: - -```python -# Launch the graph in a session that allows soft device placement and -# logs the placement decisions. -sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, - log_device_placement=True)) -``` - -- - - - -#### `tf.Session.__init__(target='', graph=None, config=None)` {#Session.__init__} - -Creates a new TensorFlow session. - -If no `graph` argument is specified when constructing the session, -the default graph will be launched in the session. If you are -using more than one graph (created with `tf.Graph()` in the same -process, you will have to use different sessions for each graph, -but each graph can be used in multiple sessions. In this case, it -is often clearer to pass the graph to be launched explicitly to -the session constructor. - -##### Args: - - -* `target`: (Optional.) The execution engine to connect to. - Defaults to using an in-process engine. At present, no value - other than the empty string is supported. -* `graph`: (Optional.) The `Graph` to be launched (described above). -* `config`: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) - protocol buffer with configuration options for the session. - - -- - - - -#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run} - -Runs the operations and evaluates the tensors in `fetches`. - -This method runs one "step" of TensorFlow computation, by -running the necessary graph fragment to execute every `Operation` -and evaluate every `Tensor` in `fetches`, substituting the values in -`feed_dict` for the corresponding input values. - -The `fetches` argument may be a list of graph elements or a single -graph element, and these determine the return value of this -method. 
A graph element can be one of the following types: - -* If the *i*th element of `fetches` is an - [`Operation`](../../api_docs/python/framework.md#Operation), the *i*th - return value will be `None`. -* If the *i*th element of `fetches` is a - [`Tensor`](../../api_docs/python/framework.md#Tensor), the *i*th return - value will be a numpy ndarray containing the value of that tensor. -* If the *i*th element of `fetches` is a - [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), - the *i*th return value will be a - [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue) - containing the value of that sparse tensor. -* If the *i*th element of `fetches` is produced by a `get_tensor_handle` op, - the *i*th return value will be a numpy ndarray containing the handle of - that tensor. - -The optional `feed_dict` argument allows the caller to override -the value of tensors in the graph. Each key in `feed_dict` can be -one of the following types: - -* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the - value may be a Python scalar, string, list, or numpy ndarray - that can be converted to the same `dtype` as that - tensor. Additionally, if the key is a - [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of - the value will be checked for compatibility with the placeholder. -* If the key is a - [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), - the value should be a - [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue). - -Each value in `feed_dict` must be convertible to a numpy array of the dtype -of the corresponding key. - -The optional `options` argument expects a [`RunOptions`] proto. The options -allow controlling the behavior of this particular step (e.g. turning tracing -on). - -The optional `run_metadata` argument expects a [`RunMetadata`] proto. When -appropriate, the non-Tensor output of this step will be collected there. 
For -example, when users turn on tracing in `options`, the profiled info will be -collected into this argument and passed back. - -##### Args: - - -* `fetches`: A single graph element, or a list of graph elements - (described above). -* `feed_dict`: A dictionary that maps graph elements to values - (described above). -* `options`: A [`RunOptions`] protocol buffer -* `run_metadata`: A [`RunMetadata`] protocol buffer - -##### Returns: - - Either a single value if `fetches` is a single graph element, or - a list of values if `fetches` is a list (described above). - -##### Raises: - - -* `RuntimeError`: If this `Session` is in an invalid state (e.g. has been - closed). -* `TypeError`: If `fetches` or `feed_dict` keys are of an inappropriate type. -* `ValueError`: If `fetches` or `feed_dict` keys are invalid or refer to a - `Tensor` that doesn't exist. - - -- - - - -#### `tf.Session.close()` {#Session.close} - -Closes this session. - -Calling this method frees all resources associated with the session. - -##### Raises: - - tf.errors.OpError: Or one of its subclasses if an error occurs while - closing the TensorFlow session. - - - -- - - - -#### `tf.Session.graph` {#Session.graph} - -The graph that was launched in this session. - - - -- - - - -#### `tf.Session.as_default()` {#Session.as_default} - -Returns a context manager that makes this object the default session. - -Use with the `with` keyword to specify that calls to -[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or -[`Tensor.run()`](../../api_docs/python/framework.md#Tensor.run) should be -executed in this session. - -```python -c = tf.constant(..) -sess = tf.Session() - -with sess.as_default(): - assert tf.get_default_session() is sess - print(c.eval()) -``` - -To get the current default session, use -[`tf.get_default_session()`](#get_default_session). 
- - -*N.B.* The `as_default` context manager *does not* close the -session when you exit the context, and you must close the session -explicitly. - -```python -c = tf.constant(...) -sess = tf.Session() -with sess.as_default(): - print(c.eval()) -# ... -with sess.as_default(): - print(c.eval()) - -sess.close() -``` - -Alternatively, you can use `with tf.Session():` to create a -session that is automatically closed on exiting the context, -including when an uncaught exception is raised. - -*N.B.* The default graph is a property of the current thread. If you -create a new thread, and wish to use the default session in that -thread, you must explicitly add a `with sess.as_default():` in that -thread's function. - -##### Returns: - - A context manager using this session as the default session. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md new file mode 100644 index 0000000000..31c6ffacfb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md @@ -0,0 +1,145 @@ +A Reader that outputs the records from a TFRecords file. + +See ReaderBase for supported methods. +- - - + +#### `tf.TFRecordReader.__init__(name=None)` {#TFRecordReader.__init__} + +Create a TFRecordReader. + +##### Args: + + +* `name`: A name for the operation (optional). + + +- - - + +#### `tf.TFRecordReader.num_records_produced(name=None)` {#TFRecordReader.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.TFRecordReader.num_work_units_completed(name=None)` {#TFRecordReader.num_work_units_completed} + +Returns the number of work units this reader has finished processing. 
+ +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.TFRecordReader.read(queue, name=None)` {#TFRecordReader.read} + +Returns the next record (key, value pair) produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.TFRecordReader.reader_ref` {#TFRecordReader.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.TFRecordReader.reset(name=None)` {#TFRecordReader.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.TFRecordReader.restore_state(state, name=None)` {#TFRecordReader.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.TFRecordReader.serialize_state(name=None)` {#TFRecordReader.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. 
+ + +- - - + +#### `tf.TFRecordReader.supports_serialize` {#TFRecordReader.supports_serialize} + +Whether the Reader implementation can serialize its state. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Variable.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Variable.from_proto.md new file mode 100644 index 0000000000..5b10d329bc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Variable.from_proto.md @@ -0,0 +1,4 @@ +#### `tf.Variable.from_proto(variable_def)` {#Variable.from_proto} + +Returns a `Variable` object created from `variable_def`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md deleted file mode 100644 index e168cabc9e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md +++ /dev/null @@ -1,148 +0,0 @@ -A Reader that outputs the entire contents of a file as a value. - -To use, enqueue filenames in a Queue. The output of Read will -be a filename (key) and the contents of that file (value). - -See ReaderBase for supported methods. -- - - - -#### `tf.WholeFileReader.__init__(name=None)` {#WholeFileReader.__init__} - -Create a WholeFileReader. - -##### Args: - - -* `name`: A name for the operation (optional). - - -- - - - -#### `tf.WholeFileReader.num_records_produced(name=None)` {#WholeFileReader.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.WholeFileReader.num_work_units_completed(name=None)` {#WholeFileReader.num_work_units_completed} - -Returns the number of work units this reader has finished processing. 
- -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.WholeFileReader.read(queue, name=None)` {#WholeFileReader.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.WholeFileReader.reader_ref` {#WholeFileReader.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.WholeFileReader.reset(name=None)` {#WholeFileReader.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.WholeFileReader.restore_state(state, name=None)` {#WholeFileReader.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. - -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.WholeFileReader.serialize_state(name=None)` {#WholeFileReader.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. 
- - -- - - - -#### `tf.WholeFileReader.supports_serialize` {#WholeFileReader.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.abs.md deleted file mode 100644 index 63a0b4c954..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.abs.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.abs(x, name=None)` {#abs} - -Computes the absolute value of a tensor. - -Given a tensor of real numbers `x`, this operation returns a tensor -containing the absolute value of each element in `x`. For example, if x is -an input element and y is an output element, this operation computes -\\(y = |x|\\). - -See [`tf.complex_abs()`](#tf_complex_abs) to compute the absolute value of a complex -number. - -##### Args: - - -* `x`: A `Tensor` of type `float`, `double`, `int32`, or `int64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` the same size and type as `x` with absolute values. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.all_variables.md new file mode 100644 index 0000000000..904b99f321 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.all_variables.md @@ -0,0 +1,12 @@ +### `tf.all_variables()` {#all_variables} + +Returns all variables that must be saved/restored. + +The `Variable()` constructor automatically adds new variables to the graph +collection `GraphKeys.VARIABLES`. This convenience function returns the +contents of that collection. + +##### Returns: + + A list of `Variable` objects. 
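The collection mechanism that `tf.all_variables()` relies on is a simple registry pattern: the constructor self-registers in a named graph collection, and the convenience function returns that collection's contents. A minimal sketch with hypothetical `Graph`/`Variable` stand-ins (not the real TensorFlow classes) of the same idea:

```python
class Graph:
    VARIABLES = "variables"  # stand-in for GraphKeys.VARIABLES

    def __init__(self):
        self._collections = {}

    def add_to_collection(self, name, value):
        self._collections.setdefault(name, []).append(value)

    def get_collection(self, name):
        # Return a copy so callers cannot mutate the registry.
        return list(self._collections.get(name, []))

class Variable:
    def __init__(self, graph, name):
        self.name = name
        # The constructor registers itself automatically, mirroring how
        # tf.Variable() adds itself to the VARIABLES collection.
        graph.add_to_collection(Graph.VARIABLES, self)

g = Graph()
v1 = Variable(g, "weights")
v2 = Variable(g, "bias")
all_vars = g.get_collection(Graph.VARIABLES)  # -> [v1, v2]
```

A saver can then iterate `all_vars` to decide what to checkpoint, which is exactly the role this collection plays for `tf.all_variables()`.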
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_less_equal.md new file mode 100644 index 0000000000..d740746a61 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_less_equal.md @@ -0,0 +1,35 @@ +### `tf.assert_less_equal(x, y, data=None, summarize=None, name=None)` {#assert_less_equal} + +Assert the condition `x <= y` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_less_equal(x, y)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_less_equal(x, y)], x) +``` + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] <= y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`, `y`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_less_equal" + +##### Returns: + + Op that raises `InvalidArgumentError` if `x <= y` is False. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_cholesky_solve.md deleted file mode 100644 index 25fcc5c908..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_cholesky_solve.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.batch_cholesky_solve(chol, rhs, name=None)` {#batch_cholesky_solve} - -Solve batches of linear eqns `A X = RHS`, given Cholesky factorizations. 
- -```python -# Solve one linear system (K = 1) for every member of the length 10 batch. -A = ... # shape 10 x 2 x 2 -RHS = ... # shape 10 x 2 x 1 -chol = tf.batch_cholesky(A) # shape 10 x 2 x 2 -X = tf.batch_cholesky_solve(chol, RHS) # shape 10 x 2 x 1 -# tf.matmul(A, X) ~ RHS -X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0] - -# Solve five linear systems (K = 5) for every member of the length 10 batch. -A = ... # shape 10 x 2 x 2 -RHS = ... # shape 10 x 2 x 5 -... -X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2] -``` - -##### Args: - - -* `chol`: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. - Cholesky factorization of `A`, e.g. `chol = tf.batch_cholesky(A)`. - For that reason, only the lower triangular parts (including the diagonal) - of the last two dimensions of `chol` are used. The strictly upper part is - assumed to be zero and not accessed. -* `rhs`: A `Tensor`, same type as `chol`, shape is `[..., M, K]`. -* `name`: A name to give this `Op`. Defaults to `batch_cholesky_solve`. - -##### Returns: - - Solution to `A x = rhs`, shape `[..., M, K]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_fft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_fft.md new file mode 100644 index 0000000000..c2ea3aa9c1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_fft.md @@ -0,0 +1,18 @@ +### `tf.batch_fft(input, name=None)` {#batch_fft} + +Compute the 1-dimensional discrete Fourier Transform over the inner-most + +dimension of `input`. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + A complex64 tensor of the same shape as `input`. The inner-most + dimension of `input` is replaced with its 1D Fourier Transform. 
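As a rough cross-check of these semantics (a NumPy analogue, not the TF kernel), `np.fft.fft` applied along the last axis transforms each inner-most vector independently:

```python
import numpy as np

# A batch of two length-4 signals; tf.batch_fft would transform the
# inner-most dimension of each one independently.
x = np.array([[1, 0, 0, 0],
              [1, 1, 1, 1]], dtype=np.complex64)

y = np.fft.fft(x, axis=-1)  # same shape as x

# The FFT of a unit impulse is all ones; the FFT of a constant signal
# concentrates everything in the zero-frequency bin.
print(y[0])  # close to [1, 1, 1, 1]
print(y[1])  # close to [4, 0, 0, 0]
```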
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_ifft3d.md deleted file mode 100644 index 1173a17d6d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_ifft3d.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_ifft3d(input, name=None)` {#batch_ifft3d} - -Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most - -3 dimensions of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. The inner-most 3 - dimensions of `input` are replaced with their inverse 3D Fourier Transform. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_matrix_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_matrix_inverse.md deleted file mode 100644 index 231056a05c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_matrix_inverse.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.batch_matrix_inverse(input, adjoint=None, name=None)` {#batch_matrix_inverse} - -Calculates the inverse of square invertible matrices or their adjoints - -(conjugate transposes). - -The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions -form square matrices. The output is a tensor of the same shape as the input -containing the inverse for all input submatrices `[..., :, :]`. - -The op uses LU decomposition with partial pivoting to compute the inverses. - -If a matrix is not invertible there is no guarantee what the op does. It -may detect the condition and raise an exception or it may simply return a -garbage result. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. 
- Shape is `[..., M, M]`. -* `adjoint`: An optional `bool`. Defaults to `False`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_self_adjoint_eig.md deleted file mode 100644 index 19d6c5319f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.batch_self_adjoint_eig.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.batch_self_adjoint_eig(input, name=None)` {#batch_self_adjoint_eig} - -Calculates the Eigen Decomposition of a batch of square self-adjoint matrices. - -The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions -form square matrices, with the same constraints as the single matrix -SelfAdjointEig. - -The result is a '[..., M+1, M] matrix with [..., 0,:] containing the -eigenvalues, and subsequent [...,1:, :] containing the eigenvectors. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[..., M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[..., M+1, M]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.bitcast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.bitcast.md deleted file mode 100644 index 4ded707a89..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.bitcast.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.bitcast(input, type, name=None)` {#bitcast} - -Bitcasts a tensor from one type to another without copying data. - -Given a tensor `input`, this operation returns a tensor that has the same buffer -data as `input` with datatype `type`. 
- -If the input datatype `T` is larger than the output datatype `type` then the -shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)]. - -If `T` is smaller than `type`, the operator requires that the rightmost -dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from -[..., sizeof(`type`)/sizeof(`T`)] to [...]. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. -* `type`: A `tf.DType` from: `tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `type`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.boolean_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.boolean_mask.md deleted file mode 100644 index e893b8ee63..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.boolean_mask.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.boolean_mask(tensor, mask, name='boolean_mask')` {#boolean_mask} - -Apply boolean mask to tensor. Numpy equivalent is `tensor[mask]`. - -```python -# 1-D example -tensor = [0, 1, 2, 3] -mask = [True, False, True, False] -boolean_mask(tensor, mask) ==> [0, 2] -``` - -In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match -the first K dimensions of `tensor`'s shape. We then have: - `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` -where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). - -##### Args: - - -* `tensor`: N-D tensor. -* `mask`: K-D boolean tensor, K <= N and K must be known statically. -* `name`: A name for this operation (optional). 
- -##### Returns: - - Tensor populated by entries in `tensor` corresponding to `True` values in - `mask`. - -##### Raises: - - -* `ValueError`: If shapes do not conform. - - -* `Examples`: - -```python -# 2-D example -tensor = [[1, 2], [3, 4], [5, 6]] -mask = [True, False, True] -boolean_mask(tensor, mask) ==> [[1, 2], [5, 6]] -``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ceil.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ceil.md deleted file mode 100644 index 34e4a7feed..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ceil.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.ceil(x, name=None)` {#ceil} - -Returns element-wise smallest integer in not less than x. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.check_numerics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.check_numerics.md deleted file mode 100644 index 46a8f6f7db..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.check_numerics.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.check_numerics(tensor, message, name=None)` {#check_numerics} - -Checks a tensor for NaN and Inf values. - -When run, reports an `InvalidArgument` error if `tensor` has any values -that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is. - -##### Args: - - -* `tensor`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. -* `message`: A `string`. Prefix of the error message. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `tensor`. 
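The validation `check_numerics` performs can be sketched in plain NumPy (an illustration of the check only, not the TF kernel; the exception type and message here are hypothetical, the real op reports an `InvalidArgument` error):

```python
import numpy as np

def check_numerics(tensor, message):
    """Pass `tensor` through unchanged, raising if it has NaN or Inf values.

    NumPy stand-in for the documented behavior; error type/text is
    illustrative only.
    """
    if not np.all(np.isfinite(tensor)):
        raise ValueError(message + " : Tensor had NaN or Inf values")
    return tensor

ok = check_numerics(np.array([1.0, 2.0]), "activations")  # returned as-is

try:
    check_numerics(np.array([1.0, np.inf]), "activations")
except ValueError as e:
    print(e)
```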
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md new file mode 100644 index 0000000000..7445d3f929 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md @@ -0,0 +1,35 @@ +### `tf.cholesky_solve(chol, rhs, name=None)` {#cholesky_solve} + +Solve linear equations `A X = RHS`, given Cholesky factorization of `A`. + +```python +# Solve one system of linear equations (K = 1). +A = [[3, 1], [1, 3]] +RHS = [[2], [22]] # shape 2 x 1 +chol = tf.cholesky(A) +X = tf.cholesky_solve(chol, RHS) +# tf.matmul(A, X) ~ RHS +X[:, 0] # Solution to the linear system A x = RHS[:, 0] + +# Solve five systems of linear equations (K = 5). +A = [[3, 1], [1, 3]] +RHS = [[1, 2, 3, 4, 5], [11, 22, 33, 44, 55]] # shape 2 x 5 +... +X[:, 2] # Solution to the linear system A x = RHS[:, 2] +``` + +##### Args: + + +* `chol`: A `Tensor`. Must be `float32` or `float64`, shape is `[M, M]`. + Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`. For that + reason, only the lower triangular part (including the diagonal) of `chol` + is used. The strictly upper part is assumed to be zero and not accessed. +* `rhs`: A `Tensor`, same type as `chol`, shape is `[M, K]`, designating `K` + systems of linear equations. +* `name`: A name to give this `Op`. Defaults to `cholesky_solve`. + +##### Returns: + + Solution to `A X = RHS`, shape `[M, K]`. The solutions to the `K` systems. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex.md deleted file mode 100644 index 55487ea170..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.complex(real, imag, name=None)` {#complex} - -Converts two real numbers to a complex number. 
- -Given a tensor `real` representing the real part of a complex number, and a -tensor `imag` representing the imaginary part of a complex number, this -operation returns complex numbers elementwise of the form \(a + bj\), where -*a* represents the `real` part and *b* represents the `imag` part. - -The input tensors `real` and `imag` must have the same shape. - -For example: - -``` -# tensor 'real' is [2.25, 3.25] -# tensor `imag` is [4.75, 5.75] -tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]] -``` - -##### Args: - - -* `real`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `imag`: A `Tensor`. Must have the same type as `real`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64` or `complex128`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex_abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex_abs.md new file mode 100644 index 0000000000..1cb76668d6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.complex_abs.md @@ -0,0 +1,26 @@ +### `tf.complex_abs(x, name=None)` {#complex_abs} + +Computes the complex absolute value of a tensor. + +Given a tensor `x` of complex numbers, this operation returns a tensor of type +`float` or `double` that is the absolute value of each element in `x`. All +elements in `x` must be complex numbers of the form \\(a + bj\\). The +absolute value is computed as \\( \sqrt{a^2 + b^2}\\). + +For example: + +``` +# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]] +tf.complex_abs(x) ==> [5.25594902, 6.60492229] +``` + +##### Args: + + +* `x`: A `Tensor` of type `complex64` or `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32` or `float64`. 
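The example above can be reproduced with NumPy, where `np.abs` on complex input computes exactly \\( \sqrt{a^2 + b^2}\\) elementwise:

```python
import numpy as np

x = np.array([-2.25 + 4.75j, -3.25 + 5.75j])

# Elementwise |a + bj| = sqrt(a^2 + b^2).
print(np.abs(x))                       # ~ [5.25594902  6.60492229]
print(np.sqrt(x.real**2 + x.imag**2))  # identical values
```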
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cond.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cond.md new file mode 100644 index 0000000000..6e6a9a69bf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cond.md @@ -0,0 +1,54 @@ +### `tf.cond(pred, fn1, fn2, name=None)` {#cond} + +Return either fn1() or fn2() based on the boolean predicate `pred`. + +`fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have +the same non-zero number and type of outputs. + +Note that the conditional execution applies only to the operations defined in +fn1 and fn2. Consider the following simple program: + +```python +z = tf.mul(a, b) +result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y)) +``` + +If x < y, the tf.add operation will be executed and the tf.square +operation will not be executed. Since z is needed for at least one +branch of the cond, the tf.mul operation is always executed, unconditionally. +Although this behavior is consistent with the dataflow model of TensorFlow, +it has occasionally surprised some users who expected lazier semantics. + +##### Args: + + +* `pred`: A scalar determining whether to return the result of `fn1` or `fn2`. +* `fn1`: The callable to be performed if pred is true. +* `fn2`: The callable to be performed if pred is false. +* `name`: Optional name prefix for the returned tensors. + +##### Returns: + + Tensors returned by the call to either `fn1` or `fn2`. If the callables + return a singleton list, the element is extracted from the list. + +##### Raises: + + +* `TypeError`: if `fn1` or `fn2` is not callable. +* `ValueError`: if `fn1` and `fn2` do not return the same number of tensors, or + return tensors of different types. + + +* `Example`: + +```python + x = tf.constant(2) + y = tf.constant(5) + def f1(): return tf.mul(x, 17) + def f2(): return tf.add(y, 23) + r = cond(tf.less(x, y), f1, f2) + # r is set to f1().
+ # Operations in f2 (e.g., tf.add) are not executed. +``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.conj.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.conj.md new file mode 100644 index 0000000000..6df004b0cd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.conj.md @@ -0,0 +1,28 @@ +### `tf.conj(input, name=None)` {#conj} + +Returns the complex conjugate of a complex number. + +Given a tensor `input` of complex numbers, this operation returns a tensor of +complex numbers that are the complex conjugate of each element in `input`. The +complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the +real part and *b* is the imaginary part. + +The complex conjugate returned by this operation is of the form \\(a - bj\\). + +For example: + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j] +``` + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md new file mode 100644 index 0000000000..ff34b6eeb1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md @@ -0,0 +1,50 @@ +### `tf.constant(value, dtype=None, shape=None, name='Const')` {#constant} + +Creates a constant tensor. + + The resulting tensor is populated with values of type `dtype`, as + specified by arguments `value` and (optionally) `shape` (see examples + below). + + The argument `value` can be a constant value, or a list of values of type + `dtype`. 
If `value` is a list, then the length of the list must be less + than or equal to the number of elements implied by the `shape` argument (if + specified). In the case where the list length is less than the number of + elements specified by `shape`, the last element in the list will be used + to fill the remaining entries. + + The argument `shape` is optional. If present, it specifies the dimensions of + the resulting tensor. If not present, the shape of `value` is used. + + If the argument `dtype` is not specified, then the type is inferred from + the type of `value`. + + For example: + + ```python + # Constant 1-D Tensor populated with value list. + tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7] + + # Constant 2-D tensor populated with scalar value -1. + tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.] + [-1. -1. -1.]] + ``` + +##### Args: + + +* `value`: A constant value (or list) of output type `dtype`. + + +* `dtype`: The type of the elements of the resulting tensor. + + +* `shape`: Optional dimensions of resulting tensor. + + +* `name`: Optional name for the tensor. + +##### Returns: + + A Constant Tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.MultivariateNormal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.MultivariateNormal.md new file mode 100644 index 0000000000..258cb03ea8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.MultivariateNormal.md @@ -0,0 +1,218 @@ +The Multivariate Normal distribution on `R^k`. + +The distribution has mean and covariance parameters mu (1-D), sigma (2-D), +or alternatively mean `mu` and factored covariance (cholesky decomposed +`sigma`) called `sigma_chol`. 
+ +#### Mathematical details + +The PDF of this distribution is: + +``` +f(x) = (2*pi)^(-k/2) |det(sigma)|^(-1/2) exp(-1/2*(x-mu)^*.sigma^{-1}.(x-mu)) +``` + +where `.` denotes the inner product on `R^k` and `^*` denotes transpose. + +Alternatively, if `sigma` is positive definite, it can be represented in terms +of its lower triangular cholesky factorization + +```sigma = sigma_chol . sigma_chol^*``` + +and the pdf above allows simpler computation: + +``` +|det(sigma)| = reduce_prod(diag(sigma_chol))^2 +x_whitened = sigma^{-1/2} . (x - mu) = tri_solve(sigma_chol, x - mu) +(x-mu)^* .sigma^{-1} . (x-mu) = x_whitened^* . x_whitened +``` + +where `tri_solve()` solves a triangular system of equations. + +#### Examples + +A single multi-variate Gaussian distribution is defined by a vector of means +of length `k`, and a covariance matrix of shape `k x k`. + +Extra leading dimensions, if provided, allow for batches. + +```python +# Initialize a single 3-variate Gaussian with diagonal covariance. +mu = [1, 2, 3] +sigma = [[1, 0, 0], [0, 3, 0], [0, 0, 2]] +dist = tf.contrib.distributions.MultivariateNormal(mu=mu, sigma=sigma) + +# Evaluate this on an observation in R^3, returning a scalar. +dist.pdf([-1, 0, 1]) + +# Initialize a batch of two 3-variate Gaussians. +mu = [[1, 2, 3], [11, 22, 33]] +sigma = ... # shape 2 x 3 x 3 +dist = tf.contrib.distributions.MultivariateNormal(mu=mu, sigma=sigma) + +# Evaluate this on two observations, each in R^3, returning a length-two +# tensor. +x = [[-1, 0, 1], [-11, 0, 11]] # Shape 2 x 3. +dist.pdf(x) +``` +- - - + +#### `tf.contrib.distributions.MultivariateNormal.__init__(mu, sigma=None, sigma_chol=None, name=None)` {#MultivariateNormal.__init__} + +Multivariate Normal distributions on `R^k`. + +User must provide means `mu`, which are tensors of rank `N+1` (`N >= 0`) +with the last dimension having length `k`.
+ +User must provide exactly one of `sigma` (the covariance matrices) or +`sigma_chol` (the cholesky decompositions of the covariance matrices). +`sigma` or `sigma_chol` must be of rank `N+2`. The last two dimensions +must both have length `k`. The first `N` dimensions correspond to batch +indices. + +If `sigma_chol` is not provided, the batch cholesky factorization of `sigma` +is calculated for you. + +The shapes of `mu` and `sigma` must match for the first `N` dimensions. + +Regardless of which parameter is provided, the covariance matrices must all +be **positive definite** (an error is raised if one of them is not). + +##### Args: + + +* `mu`: (N+1)-D. `float` or `double` tensor, the means of the distributions. +* `sigma`: (N+2)-D. (optional) `float` or `double` tensor, the covariances + of the distribution(s). The first `N+1` dimensions must match + those of `mu`. Must be batch-positive-definite. +* `sigma_chol`: (N+2)-D. (optional) `float` or `double` tensor, a + lower-triangular factorization of `sigma` + (`sigma = sigma_chol . sigma_chol^*`). The first `N+1` dimensions + must match those of `mu`. The tensor itself need not be batch + lower triangular: we ignore the upper triangular part. However, + the batch diagonals must be positive (i.e., sigma_chol must be + batch-positive-definite). +* `name`: The name to give Ops created by the initializer. + +##### Raises: + + +* `ValueError`: if neither sigma nor sigma_chol is provided. +* `TypeError`: if mu and sigma (resp. sigma_chol) are different dtypes. + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.dtype` {#MultivariateNormal.dtype} + + + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.entropy(name=None)` {#MultivariateNormal.entropy} + +The entropies of these Multivariate Normals. + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropies. 
+ + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.is_reparameterized` {#MultivariateNormal.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.log_pdf(x, name=None)` {#MultivariateNormal.log_pdf} + +Log pdf of observations `x` given these Multivariate Normals. + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.mean` {#MultivariateNormal.mean} + + + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.mu` {#MultivariateNormal.mu} + + + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.pdf(x, name=None)` {#MultivariateNormal.pdf} + +The PDF of observations `x` under these Multivariate Normals. + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.sample(n, seed=None, name=None)` {#MultivariateNormal.sample} + +Sample `n` observations from the Multivariate Normal Distributions. + +##### Args: + + +* `n`: `Scalar`, type int32, the number of observations to sample. +* `seed`: Python integer, the random seed. +* `name`: The name to give this op. + +##### Returns: + + +* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each + of the distributions determined by broadcasting the hyperparameters. 
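The whitening identities from the "Mathematical details" section above can be verified numerically; the following NumPy sketch (not the TF implementation) computes the log-PDF via the Cholesky factor and checks it against the direct quadratic form:

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])
sigma = np.array([[1.0, 0.0, 0.0],
                  [0.0, 3.0, 0.0],
                  [0.0, 0.0, 2.0]])
x = np.array([-1.0, 0.0, 1.0])
k = mu.size

chol = np.linalg.cholesky(sigma)  # sigma = chol . chol^T

# |det(sigma)| = reduce_prod(diag(sigma_chol))^2
det = np.prod(np.diag(chol)) ** 2

# Whitening: x_whitened = chol^{-1} (x - mu); the quadratic form is then
# just the squared norm of the whitened residual.
x_whitened = np.linalg.solve(chol, x - mu)
quad = x_whitened @ x_whitened

# Same quantity computed the direct (slower) way.
quad_direct = (x - mu) @ np.linalg.inv(sigma) @ (x - mu)

log_pdf = -0.5 * (k * np.log(2.0 * np.pi) + np.log(det) + quad)
print(det, quad, log_pdf)
```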
+ + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.sigma` {#MultivariateNormal.sigma} + + + + +- - - + +#### `tf.contrib.distributions.MultivariateNormal.sigma_det` {#MultivariateNormal.sigma_det} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.fully_connected.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.fully_connected.md deleted file mode 100644 index da63a14cd9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.fully_connected.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.contrib.layers.fully_connected(*args, **kwargs)` {#fully_connected} - -Adds a fully connected layer. - -`fully_connected` creates a variable called `weights`, representing a fully -connected weight matrix, which is multiplied by the `inputs` to produce a -`Tensor` of hidden units. If a `normalizer_fn` is provided (such as -`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is -None and a `biases_initializer` is provided then a `biases` variable would be -created and added the hidden units. Finally, if `activation_fn` is not `None`, -it is applied to the hidden units as well. - -Note: that if `inputs` have a rank greater than 2, then `inputs` is flattened -prior to the initial matrix multiply by `weights`. - -##### Args: - - -* `inputs`: A tensor of with at least rank 2 and value for the last dimension, - i.e. `[batch_size, depth]`, `[None, None, None, channels]`. -* `num_outputs`: Integer, the number of output units in the layer. -* `activation_fn`: activation function. -* `normalizer_fn`: normalization function to use instead of `biases`. If - `normalize_fn` is provided then `biases_initializer` and - `biases_regularizer` are ignored and `biases` are not created nor added. -* `normalizer_params`: normalization function parameters. -* `weights_initializer`: An initializer for the weights. 
-* `weights_regularizer`: Optional regularizer for the weights. -* `biases_initializer`: An initializer for the biases. If None skip biases. -* `biases_regularizer`: Optional regularizer for the biases. -* `reuse`: whether or not the layer and its variables should be reused. To be - able to reuse the layer scope must be given. -* `variables_collections`: Optional list of collections for all the variables or - a dictionay containing a different list of collection per variable. -* `outputs_collections`: collection to add the outputs. -* `scope`: Optional scope for variable_op_scope. - -##### Returns: - - the tensor variable representing the result of the series of operations. - -##### Raises: - - -* `ValueError`: if x has rank less than 2 or if its last dimension is not set. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.summarize_activations.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.summarize_activations.md new file mode 100644 index 0000000000..dc2e7a6044 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.summarize_activations.md @@ -0,0 +1,4 @@ +### `tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)` {#summarize_activations} + +Summarize activations, using `summarize_activation` to summarize. 
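Setting aside variable creation, collections, and normalization, the computation described by `fully_connected` above reduces to a flatten, a matrix multiply, a bias add, and an activation. A minimal NumPy sketch (a hypothetical helper, not the library code):

```python
import numpy as np

def fully_connected(inputs, weights, biases=None, activation_fn=None):
    """y = activation_fn(inputs . weights + biases).

    Inputs of rank > 2 are flattened to [batch_size, depth] before the
    matrix multiply, mirroring the docstring's note.
    """
    if inputs.ndim > 2:
        inputs = inputs.reshape(inputs.shape[0], -1)
    outputs = inputs @ weights
    if biases is not None:
        outputs = outputs + biases
    if activation_fn is not None:
        outputs = activation_fn(outputs)
    return outputs

x = np.ones((4, 3))       # [batch_size, depth]
w = np.full((3, 2), 0.5)  # [depth, num_outputs]
b = np.zeros(2)
relu = lambda t: np.maximum(t, 0.0)

y = fully_connected(x, w, b, activation_fn=relu)
print(y.shape)  # (4, 2)
```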
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.xavier_initializer_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.xavier_initializer_conv2d.md new file mode 100644 index 0000000000..9deeb48b5b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.xavier_initializer_conv2d.md @@ -0,0 +1,29 @@ +### `tf.contrib.layers.xavier_initializer_conv2d(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer_conv2d} + +Returns an initializer performing "Xavier" initialization for weights. + +This function implements the weight initialization from: + +Xavier Glorot and Yoshua Bengio (2010): + Understanding the difficulty of training deep feedforward neural + networks. International conference on artificial intelligence and + statistics. + +This initializer is designed to keep the scale of the gradients roughly the +same in all layers. In uniform distribution this ends up being the range: +`x = sqrt(6. / (in + out)); [-x, x]` and for normal distribution a standard +deviation of `sqrt(3. / (in + out))` is used. + +##### Args: + + +* `uniform`: Whether to use uniform or normal distributed random initialization. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer for a weight matrix. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.RunConfig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.RunConfig.md deleted file mode 100644 index ffdf8703c0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.RunConfig.md +++ /dev/null @@ -1,47 +0,0 @@ -This class specifies the specific configurations for the run. 
- -Parameters: - execution_mode: Runners use this flag to execute different tasks, like - training vs evaluation. 'all' (the default) executes both training and - eval. - master: TensorFlow master. Empty string (the default) for local. - task: Task id of the replica running the training (default: 0). - num_ps_replicas: Number of parameter server tasks to use (default: 0). - training_worker_session_startup_stagger_secs: Seconds to sleep between the - startup of each worker task session (default: 5). - training_worker_max_startup_secs: Max seconds to wait before starting any - worker (default: 60). - eval_delay_secs: Number of seconds between the beginning of each eval run. - If one run takes more than this amount of time, the next run will start - immediately once that run completes (default 60). - eval_steps: Number of steps to run in each eval (default: 100). - num_cores: Number of cores to be used (default: 4). - verbose: Controls the verbosity, possible values: - 0: the algorithm and debug information is muted. - 1: trainer prints the progress. - 2: log device placement is printed. - gpu_memory_fraction: Fraction of GPU memory used by the process on - each GPU uniformly on the same machine. - tf_random_seed: Random seed for TensorFlow initializers. - Setting this value allows consistency between reruns. - keep_checkpoint_max: The maximum number of recent checkpoint files to keep. - As new files are created, older files are deleted. - If None or 0, all checkpoint files are kept. - Defaults to 5 (that is, the 5 most recent checkpoint files are kept.) - keep_checkpoint_every_n_hours: Number of hours between each checkpoint - to be saved. The default value of 10,000 hours effectively disables - the feature. - -Attributes: - tf_master: Tensorflow master. - tf_config: Tensorflow Session Config proto. - tf_random_seed: Tensorflow random seed. - keep_checkpoint_max: Maximum number of checkpoints to keep. 
- keep_checkpoint_every_n_hours: Number of hours between each checkpoint. -- - - - -#### `tf.contrib.learn.RunConfig.__init__(execution_mode='all', master='', task=0, num_ps_replicas=0, training_worker_session_startup_stagger_secs=5, training_worker_max_startup_secs=60, eval_delay_secs=60, eval_steps=100, num_cores=4, verbose=1, gpu_memory_fraction=1, tf_random_seed=42, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000)` {#RunConfig.__init__} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowDNNClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowDNNClassifier.md new file mode 100644 index 0000000000..03c779259a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowDNNClassifier.md @@ -0,0 +1,302 @@ +TensorFlow DNN Classifier model. + +Parameters: + hidden_units: List of hidden units per layer. + n_classes: Number of classes in the target. + batch_size: Mini batch size. + steps: Number of steps to run over data. + optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad". + learning_rate: If this is a constant float value, no decay function is used. + Instead, a customized decay function can be passed that accepts + global_step as a parameter and returns a Tensor. + e.g. exponential decay function: + def exp_decay(global_step): + return tf.train.exponential_decay( + learning_rate=0.1, global_step=global_step, + decay_steps=2, decay_rate=0.001) + class_weight: None or list of n_classes floats. Weight associated with + classes for loss computation. If not given, all classes are + assumed to have weight one. + continue_training: if True, an already initialized model will be + trained further on every call of fit. + config: RunConfig object that controls the configurations of the + session, e.g. num_cores, gpu_memory_fraction, etc.
+ dropout: When not None, the probability we will drop out a given coordinate. +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.__init__(hidden_units, n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1, dropout=None)` {#TensorFlowDNNClassifier.__init__} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.bias_` {#TensorFlowDNNClassifier.bias_} + +Returns bias of the DNN's bias layers. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowDNNClassifier.evaluate} + +See base class. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowDNNClassifier.fit} + +Builds a neural network model using the provided `model_fn` and training +data X and y. + +Note: the first call constructs the graph and initializes the +variables. Subsequent calls continue training the same model. +This logic follows the partial_fit() interface in scikit-learn. + +To restart learning, create a new estimator. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns an array of targets. The training target values + (class labels in classification, real numbers in regression). + +* `steps`: int, number of steps to train. + If None or 0, train for `self.steps`. +* `monitors`: List of `BaseMonitor` objects to print training progress and + invoke early stopping. +* `logdir`: the directory to save the log file that can be used for + optional visualization. + +##### Returns: + + Returns self.
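The construct-once, continue-training contract that `fit` describes can be sketched independently of TensorFlow in plain Python. All names below (`SketchEstimator`, `graph_built`, `trained_steps`) are illustrative stand-ins, not part of the real API:

```python
class SketchEstimator:
    """Illustrative stand-in for the fit() contract described above."""

    def __init__(self, steps=200):
        self.steps = steps          # default number of training steps
        self.graph_built = False    # has the graph been constructed yet?
        self.trained_steps = 0

    def fit(self, x, y, steps=None):
        if not self.graph_built:
            # First call: construct the graph and initialize variables.
            self.graph_built = True
        # Subsequent calls continue training the same model.
        self.trained_steps += steps or self.steps
        return self  # scikit-learn style: fit() returns self

est = SketchEstimator(steps=100)
est.fit([[0.0]], [0]).fit([[1.0]], [1])  # second call continues training
print(est.trained_steps)  # 200
```

Because `fit` returns `self`, calls chain; to restart from scratch you build a fresh estimator rather than resetting an existing one.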
+ + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.get_params(deep=True)` {#TensorFlowDNNClassifier.get_params} + +Get parameters for this estimator. + +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.get_tensor(name)` {#TensorFlowDNNClassifier.get_tensor} + +Returns tensor by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.get_tensor_value(name)` {#TensorFlowDNNClassifier.get_tensor_value} + +Returns the value of the tensor given by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Numpy array - value of the tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.get_variable_names()` {#TensorFlowDNNClassifier.get_variable_names} + +Returns a list of all variable names in this model. + +##### Returns: + + List of names. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.model_dir` {#TensorFlowDNNClassifier.model_dir} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.partial_fit(x, y)` {#TensorFlowDNNClassifier.partial_fit} + +Incremental fit on a batch of samples. + +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This can implement +either iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at once, or when the model is taking a long time +to converge and you want to split up training into subparts. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model.
+ +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns an array of targets. The training target values + (class labels in classification, real numbers in regression). + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowDNNClassifier.predict} + +Predict class or regression target for X. + +For a classification model, the predicted class for each sample in X is +returned. For a regression model, the predicted value based on X is +returned. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `axis`: Which axis to argmax for classification. + By default axis 1 (next after batch) is used. + Use 2 for sequence predictions. +* `batch_size`: If the test set is too big, use batch size to split + it into mini batches. By default the batch_size member + variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples]. The predicted classes or predicted + value. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.predict_proba(x, batch_size=None)` {#TensorFlowDNNClassifier.predict_proba} + +Predict class probabilities of the input samples X. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `batch_size`: If the test set is too big, use batch size to split + it into mini batches. By default the batch_size member variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples, n_classes]. The predicted + probabilities for each class. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.restore(cls, path, config=None)` {#TensorFlowDNNClassifier.restore} + +Restores the model from the given path. + +##### Args: + + +* `path`: Path to the checkpoints and other model information. +* `config`: RunConfig object that controls the configurations of the session, + e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be + reconfigured.
+ +##### Returns: + + Estimator, object of the subclass of TensorFlowEstimator. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.save(path)` {#TensorFlowDNNClassifier.save} + +Saves checkpoints and graph to the given path. + +##### Args: + + +* `path`: Folder to save model to. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.set_params(**params)` {#TensorFlowDNNClassifier.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The latter have parameters of the form +``<component>__<parameter>`` so that it's possible to update each +component of a nested object. + +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowDNNClassifier.train} + +Trains a model given an input builder function. + +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train the model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowDNNClassifier.weights_` {#TensorFlowDNNClassifier.weights_} + +Returns weights of the DNN weight layers. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowLinearClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowLinearClassifier.md new file mode 100644 index 0000000000..469aa72b3a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.TensorFlowLinearClassifier.md @@ -0,0 +1,279 @@ +TensorFlow Linear Classifier model.
+- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.__init__(n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowLinearClassifier.__init__} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.bias_` {#TensorFlowLinearClassifier.bias_} + +Returns bias of the linear classifier. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowLinearClassifier.evaluate} + +See base class. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowLinearClassifier.fit} + +Builds a neural network model using the provided `model_fn` and training +data X and y. + +Note: the first call constructs the graph and initializes the +variables. Subsequent calls continue training the same model. +This logic follows the partial_fit() interface in scikit-learn. + +To restart learning, create a new estimator. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns an array of targets. The training target values + (class labels in classification, real numbers in regression). + +* `steps`: int, number of steps to train. + If None or 0, train for `self.steps`. +* `monitors`: List of `BaseMonitor` objects to print training progress and + invoke early stopping. +* `logdir`: the directory to save the log file that can be used for + optional visualization. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.get_params(deep=True)` {#TensorFlowLinearClassifier.get_params} + +Get parameters for this estimator.
+ +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.get_tensor(name)` {#TensorFlowLinearClassifier.get_tensor} + +Returns tensor by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.get_tensor_value(name)` {#TensorFlowLinearClassifier.get_tensor_value} + +Returns the value of the tensor given by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Numpy array - value of the tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.get_variable_names()` {#TensorFlowLinearClassifier.get_variable_names} + +Returns a list of all variable names in this model. + +##### Returns: + + List of names. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.model_dir` {#TensorFlowLinearClassifier.model_dir} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.partial_fit(x, y)` {#TensorFlowLinearClassifier.partial_fit} + +Incremental fit on a batch of samples. + +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This can implement +either iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at once, or when the model is taking a long time +to converge and you want to split up training into subparts. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns an array of targets.
The training target values + (class labels in classification, real numbers in regression). + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowLinearClassifier.predict} + +Predict class or regression target for X. + +For a classification model, the predicted class for each sample in X is +returned. For a regression model, the predicted value based on X is +returned. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `axis`: Which axis to argmax for classification. + By default axis 1 (next after batch) is used. + Use 2 for sequence predictions. +* `batch_size`: If the test set is too big, use batch size to split + it into mini batches. By default the batch_size member + variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples]. The predicted classes or predicted + value. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.predict_proba(x, batch_size=None)` {#TensorFlowLinearClassifier.predict_proba} + +Predict class probabilities of the input samples X. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `batch_size`: If the test set is too big, use batch size to split + it into mini batches. By default the batch_size member variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples, n_classes]. The predicted + probabilities for each class. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.restore(cls, path, config=None)` {#TensorFlowLinearClassifier.restore} + +Restores the model from the given path. + +##### Args: + + +* `path`: Path to the checkpoints and other model information. +* `config`: RunConfig object that controls the configurations of the session, + e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be + reconfigured. + +##### Returns: + + Estimator, object of the subclass of TensorFlowEstimator.
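The `save`/`restore` pairing documented above follows a common pattern: `save` writes everything needed to rebuild the model into a folder, and `restore` is a class method so that a subclass calling it gets an instance of that subclass back. A minimal sketch of the pattern in plain Python, using JSON in place of TensorFlow checkpoints (`SketchModel` and its members are illustrative names, not the real API):

```python
import json
import os
import tempfile

class SketchModel:
    """Illustrative stand-in for the save()/restore() pattern."""

    def __init__(self, weights=None):
        self.weights = weights or {}

    def save(self, path):
        # Persist everything needed to rebuild the model into a folder.
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "weights.json"), "w") as f:
            json.dump(self.weights, f)

    @classmethod
    def restore(cls, path):
        # Class method: a subclass calling restore() reconstructs an
        # instance of itself from the saved state.
        with open(os.path.join(path, "weights.json")) as f:
            return cls(weights=json.load(f))

path = os.path.join(tempfile.mkdtemp(), "model")
SketchModel({"w": 1.5}).save(path)
print(SketchModel.restore(path).weights)  # {'w': 1.5}
```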
+ + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.save(path)` {#TensorFlowLinearClassifier.save} + +Saves checkpoints and graph to the given path. + +##### Args: + + +* `path`: Folder to save model to. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.set_params(**params)` {#TensorFlowLinearClassifier.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The latter have parameters of the form +``<component>__<parameter>`` so that it's possible to update each +component of a nested object. + +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowLinearClassifier.train} + +Trains a model given an input builder function. + +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train the model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowLinearClassifier.weights_` {#TensorFlowLinearClassifier.weights_} + +Returns weights of the linear classifier. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.extract_pandas_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.extract_pandas_matrix.md deleted file mode 100644 index c2a275bc66..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.extract_pandas_matrix.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.learn.extract_pandas_matrix(data)` {#extract_pandas_matrix} - -Extracts numpy matrix from pandas DataFrame.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.infer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.infer.md new file mode 100644 index 0000000000..616e74f3a4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.infer.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.infer(restore_checkpoint_path, output_dict, feed_dict=None)` {#infer} + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.read_batch_record_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.read_batch_record_features.md new file mode 100644 index 0000000000..aa4e964be1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.read_batch_record_features.md @@ -0,0 +1,33 @@ +### `tf.contrib.learn.read_batch_record_features(file_pattern, batch_size, features, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, parser_num_threads=1, name='dequeue_record_examples')` {#read_batch_record_features} + +Reads TFRecord, queues, batches and parses `Example` proto. + +See more detailed description in `read_examples`. + +##### Args: + + +* `file_pattern`: List of files or pattern of file paths containing + `Example` records. See `tf.gfile.Glob` for pattern rules. +* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. +* `features`: A `dict` mapping feature keys to `FixedLenFeature` or + `VarLenFeature` values. +* `randomize_input`: Whether the input should be randomized. +* `num_epochs`: Integer specifying the number of times to read through the + dataset. If None, cycles through the dataset forever. NOTE - If specified, + creates a variable that must be initialized, so call + tf.initialize_all_variables() as shown in the tests. +* `queue_capacity`: Capacity for input queue. 
+* `reader_num_threads`: The number of threads to read examples. +* `parser_num_threads`: The number of threads to parse examples. +* `name`: Name of resulting op. + +##### Returns: + + A dict of `Tensor` or `SparseTensor` objects for each key in `features`. + +##### Raises: + + +* `ValueError`: for invalid inputs. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md new file mode 100644 index 0000000000..452115a428 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md @@ -0,0 +1,24 @@ +### `tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)` {#set_difference} + +Compute the set difference of elements in the last dimension of `a` and `b`. + +All but the last dimension of `a` and `b` must match. + +##### Args: + + +* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices + must be sorted in row-major order. +* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be + `SparseTensor` if `a` is `SparseTensor`. If sparse, indices must be + sorted in row-major order. +* `aminusb`: Whether to subtract `b` from `a`, vs. vice versa. +* `validate_indices`: Whether to validate the order and range of sparse indices + in `a` and `b`. + +##### Returns: + + A `SparseTensor` with the same rank as `a` and `b`, and all but the last + dimension the same. Elements along the last dimension contain the + differences.
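The last-dimension semantics of `set_difference` can be illustrated with a small pure-Python sketch. This covers only the dense, duplicate-free case and returns nested lists; the real op returns a `SparseTensor` precisely because rows may end up with different lengths. `last_dim_set_difference` is an illustrative name, not the TensorFlow API:

```python
def last_dim_set_difference(a, b, aminusb=True):
    """Set difference along the last dimension of two nested lists
    whose outer dimensions match (illustrative sketch only)."""
    if a and isinstance(a[0], list):
        # Recurse until we reach the innermost (last) dimension.
        return [last_dim_set_difference(x, y, aminusb)
                for x, y in zip(a, b)]
    keep, drop = (a, b) if aminusb else (b, a)
    return sorted(set(keep) - set(drop))

a = [[1, 2, 3, 4], [5, 6, 7, 8]]
b = [[3, 4, 9, 10], [7, 8, 11, 12]]
print(last_dim_set_difference(a, b))                 # [[1, 2], [5, 6]]
print(last_dim_set_difference(a, b, aminusb=False))  # [[9, 10], [11, 12]]
```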
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md deleted file mode 100644 index 3740bbbaad..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.contrib.metrics.streaming_mean_relative_error(predictions, labels, normalizer, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_relative_error} - -Computes the mean relative error by normalizing with the given values. - -The `streaming_mean_relative_error` function creates two local variables, -`total` and `count` that are used to compute the mean relative absolute error. -This average is ultimately returned as `mean_relative_error`: an idempotent -operation that simply divides `total` by `count`. To facilitate the estimation -of the mean relative error over a stream of data, the function utilizes two -operations. First, a `relative_errors` operation divides the absolute value -of the differences between `predictions` and `labels` by the `normalizer`. -Second, an `update_op` operation whose behavior is dependent on the value of -`weights`. If `weights` is None, then `update_op` increments `total` with the -reduced sum of `relative_errors` and increments `count` with the number of -elements in `relative_errors`. If `weights` is not `None`, then `update_op` -increments `total` with the reduced sum of the product of `weights` and -`relative_errors` and increments `count` with the reduced sum of `weights`. In -addition to performing the updates, `update_op` also returns the -`mean_relative_error` value. - -##### Args: - - -* `predictions`: A `Tensor` of arbitrary shape. -* `labels`: A `Tensor` of the same shape as `predictions`. 
-* `normalizer`: A `Tensor` of the same shape as `predictions`. -* `weights`: An optional set of weights of the same shape as `predictions`. If - `weights` is not None, the function computes a weighted mean. -* `metrics_collections`: An optional list of collections that - `mean_relative_error` should be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `mean_relative_error`: A tensor representing the current mean, the value of - `total` divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `mean_relative_error`. - -##### Raises: - - -* `ValueError`: If `weights` is not `None` and its shape doesn't match - `predictions` or if either `metrics_collections` or `updates_collections` - are not a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_recall_at_k.md deleted file mode 100644 index dd03b95b69..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_recall_at_k.md +++ /dev/null @@ -1,52 +0,0 @@ -### `tf.contrib.metrics.streaming_recall_at_k(predictions, labels, k, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall_at_k} - -Computes the recall@k of the predictions with respect to dense labels. - -The `streaming_recall_at_k` function creates two local variables, `total` and -`count`, that are used to compute the recall@k frequency. This frequency is -ultimately returned as `recall_at_`: an idempotent operation that simply -divides `total` by `count`. To facilitate the estimation of recall@k over a -stream of data, the function utilizes two operations. 
First, an `in_top_k` -operation computes a tensor with shape [batch_size] whose elements indicate -whether or not the corresponding label is in the top `k` predictions of the -`predictions` `Tensor`. Second, an `update_op` operation updates these -variables; its behavior depends on the value of `ignore_mask`. If `ignore_mask` is None, then -`update_op` increments `total` with the number of elements of `in_top_k` that -are set to `True` and increments `count` with the batch size. If `ignore_mask` -is not `None`, then `update_op` increments `total` with the number of elements -in `in_top_k` that are `True` whose corresponding element in `ignore_mask` is -`False`. In addition to performing the updates, `update_op` also returns the -recall value. - -##### Args: - - -* `predictions`: A floating point tensor of dimension [batch_size, num_classes] -* `labels`: A tensor of dimension [batch_size] whose type is in `int32`, - `int64`. -* `k`: The number of top elements to look at for computing recall. -* `ignore_mask`: An optional, binary tensor whose size matches `labels`. If an - element of `ignore_mask` is True, the corresponding prediction and label - pair is ignored. Otherwise, the pair is used to compute the metrics. -* `metrics_collections`: An optional list of collections that `recall_at_k` - should be added to. -* `updates_collections`: An optional list of collections `update_op` should be - added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `recall_at_k`: A tensor representing the recall@k, the fraction of labels - which fall into the top `k` predictions. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `recall_at_k`. - -##### Raises: - - -* `ValueError`: If the dimensions of `predictions` and `labels` don't match or - if `ignore_mask` is not `None` and its shape doesn't match `predictions` - or if either `metrics_collections` or `updates_collections` are not a list - or tuple.
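The `total`/`count` bookkeeping described above can be sketched in plain Python, omitting `ignore_mask` for brevity. `in_top_k` and `StreamingRecallAtK` below are illustrative stand-ins, not the TensorFlow ops:

```python
def in_top_k(predictions, labels, k):
    """True where the label index is among the top k scores of its row."""
    hits = []
    for scores, label in zip(predictions, labels):
        top = sorted(range(len(scores)), key=lambda i: scores[i],
                     reverse=True)[:k]
        hits.append(label in top)
    return hits

class StreamingRecallAtK:
    """Accumulates total (hits) and count (examples) across batches."""

    def __init__(self, k):
        self.k, self.total, self.count = k, 0, 0

    def update(self, predictions, labels):
        hits = in_top_k(predictions, labels, self.k)
        self.total += sum(hits)   # labels found in the top k
        self.count += len(hits)   # batch size
        return self.total / self.count  # current recall@k

metric = StreamingRecallAtK(k=2)
print(metric.update([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]], [1, 2]))  # 0.5
print(metric.update([[0.9, 0.05, 0.05]], [0]))  # 2/3
```

Each `update` both folds the new batch into the running totals and returns the recall so far, mirroring how `update_op` returns the metric value.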
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.decode_csv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.decode_csv.md deleted file mode 100644 index f2ebf6945b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.decode_csv.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.decode_csv(records, record_defaults, field_delim=None, name=None)` {#decode_csv} - -Convert CSV records to tensors. Each column maps to one tensor. - -RFC 4180 format is expected for the CSV records. -(https://tools.ietf.org/html/rfc4180) -Note that we allow leading and trailing spaces with int or float field. - -##### Args: - - -* `records`: A `Tensor` of type `string`. - Each string is a record/row in the csv and all records should have - the same format. -* `record_defaults`: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`. - One tensor per column of the input record, with either a - scalar default value for that column or empty if the column is required. -* `field_delim`: An optional `string`. Defaults to `","`. - delimiter to separate fields in a record. -* `name`: A name for the operation (optional). - -##### Returns: - - A list of `Tensor` objects. Has the same type as `record_defaults`. - Each tensor will have the same shape as records. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.depth_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.depth_to_space.md new file mode 100644 index 0000000000..c0117c82c7 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.depth_to_space.md @@ -0,0 +1,95 @@ +### `tf.depth_to_space(input, block_size, name=None)` {#depth_to_space} + +DepthToSpace for tensors of type T. + +Rearranges data from depth into blocks of spatial data. +This is the reverse transformation of SpaceToDepth. 
More specifically, +this op outputs a copy of the input tensor where values from the `depth` +dimension are moved in spatial blocks to the `height` and `width` dimensions. +The attr `block_size` indicates the input block size and how the data is moved. + + * Chunks of data of size `block_size * block_size` from depth are rearranged + into non-overlapping blocks of size `block_size x block_size` + * The width of the output tensor is `input_width * block_size`, whereas the + height is `input_height * block_size`. + * The depth of the input tensor must be divisible by + `block_size * block_size`. + +That is, assuming the input is in the shape: +`[batch, height, width, depth]`, +the shape of the output will be: +`[batch, height*block_size, width*block_size, depth/(block_size*block_size)]` + +This operation requires that the input tensor be of rank 4, and that +`block_size` be >=1 and that `block_size * block_size` be a divisor of the +input depth. + +This operation is useful for resizing the activations between convolutions +(but keeping all data), e.g. instead of pooling. It is also useful for training +purely convolutional models. + +For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2: + +```prettyprint +x = [[[[1, 2, 3, 4]]]] + +``` + +This operation will output a tensor of shape `[1, 2, 2, 1]`: + +```prettyprint + [[[[1], [2]], + [[3], [4]]]] +``` + +Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, +the corresponding output will have 2x2 elements and will have a depth of +1 channel (1 = `4 / (block_size * block_size)`). +The output element shape is `[2, 2, 1]`. + +For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
+ +```prettyprint +x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] +``` + +This operation, for a block size of 2, will return the following tensor of shape +`[1, 2, 2, 3]`: + +```prettyprint + [[[[1, 2, 3], [4, 5, 6]], + [[7, 8, 9], [10, 11, 12]]]] + +``` + +Similarly, for the following input of shape `[1, 2, 2, 4]`, and a block size of 2: + +```prettyprint +x = [[[[1, 2, 3, 4], + [5, 6, 7, 8]], + [[9, 10, 11, 12], + [13, 14, 15, 16]]]] +``` + +the operator will return the following tensor of shape `[1, 4, 4, 1]`: + +```prettyprint + [[ [1], [2], [5], [6]], + [ [3], [4], [7], [8]], + [ [9], [10], [13], [14]], + [ [11], [12], [15], [16]]] + +``` + +##### Args: + + +* `input`: A `Tensor`. +* `block_size`: An `int`. + The size of the spatial block, same as in SpaceToDepth. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.erf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.erf.md new file mode 100644 index 0000000000..3a425b7c4a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.erf.md @@ -0,0 +1,14 @@ +### `tf.erf(x, name=None)` {#erf} + +Computes the Gauss error function of `x` element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.InternalError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.InternalError.md new file mode 100644 index 0000000000..dd229d2a3d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.InternalError.md @@ -0,0 +1,12 @@ +Raised when the system experiences an internal error.
+ +This exception is raised when some invariant expected by the runtime +has been broken. Catching this exception is not recommended. + +- - - + +#### `tf.errors.InternalError.__init__(node_def, op, message)` {#InternalError.__init__} + +Creates an `InternalError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnauthenticatedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnauthenticatedError.md new file mode 100644 index 0000000000..d3344dc6b1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnauthenticatedError.md @@ -0,0 +1,11 @@ +The request does not have valid authentication credentials. + +This exception is not currently used. + +- - - + +#### `tf.errors.UnauthenticatedError.__init__(node_def, op, message)` {#UnauthenticatedError.__init__} + +Creates an `UnauthenticatedError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnknownError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnknownError.md new file mode 100644 index 0000000000..3e18ec866b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.UnknownError.md @@ -0,0 +1,15 @@ +Unknown error. + +An example of where this error may be returned is if a Status value +received from another address space belongs to an error-space that +is not known to this address space. Also errors raised by APIs that +do not return enough error information may be converted to this +error. + +- - - + +#### `tf.errors.UnknownError.__init__(node_def, op, message, error_code=2)` {#UnknownError.__init__} + +Creates an `UnknownError`. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.exp.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.exp.md new file mode 100644 index 0000000000..6bceeabd27 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.exp.md @@ -0,0 +1,14 @@ +### `tf.exp(x, name=None)` {#exp} + +Computes exponential of x element-wise. \\(y = e^x\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.fft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.fft2d.md deleted file mode 100644 index e480dcb27e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.fft2d.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.fft2d(input, name=None)` {#fft2d} - -Compute the 2-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 matrix. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. The 2D Fourier Transform of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor.md deleted file mode 100644 index 4aadcff6ef..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.floor(x, name=None)` {#floor} - -Returns element-wise largest integer not greater than x. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
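The `floor` semantics above are easy to sanity-check against Python's `math.floor`; a tiny elementwise sketch (illustrative only, no TensorFlow involved):

```python
import math

def floor_elementwise(xs):
    # tf.floor semantics: the largest integer not greater than each element.
    # Note that floor rounds toward negative infinity, not toward zero.
    return [math.floor(v) for v in xs]

assert floor_elementwise([1.7, -1.7, 2.0]) == [1, -2, 2]
```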
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floordiv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floordiv.md deleted file mode 100644 index 8f824e867e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floordiv.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.floordiv(x, y, name=None)` {#floordiv} - -Divides `x / y` elementwise, rounding down for floating point. - -The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for -floating point arguments so that the result is always an integer (though -possibly an integer represented as floating point). This op is generated by -`x // y` floor division in Python 3 and in Python 2.7 with -`from __future__ import division`. - -Note that for efficiency, `floordiv` uses C semantics for negative numbers -(unlike Python and Numpy). - -`x` and `y` must have the same type, and the result will have the same type -as well. - -##### Args: - - -* `x`: `Tensor` numerator of real numeric type. -* `y`: `Tensor` denominator of real numeric type. -* `name`: A name for the operation (optional). - -##### Returns: - - `x / y` rounded down (except possibly towards zero for negative integers). - -##### Raises: - - -* `TypeError`: If the inputs are complex. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md deleted file mode 100644 index 7c8777a660..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.gather_nd(params, indices, name=None)` {#gather_nd} - -Gather values from `params` according to `indices`. - -`indices` must be integer tensor, containing indices into `params`. -It must be shape `[d_0, ..., d_N, R]` where `R` is the rank of `params`. 
-The innermost dimension of `indices` (with length `R`) corresponds to the -indices of `params`. - -Produces an output tensor with shape `[d_0, ..., d_{n-1}]` where: - - output[i, j, k, ...] = params[indices[i, j, k, ..., :]] - -e.g. for `indices` a matrix: - - output[i] = params[indices[i, :]] - -##### Args: - - -* `params`: A `Tensor`. R-D. The tensor from which to gather values. -* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - (N+1)-D. Index tensor having shape `[d_0, ..., d_N, R]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `params`. - N-D. Values from `params` gathered from indices given by `indices`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_default_session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_default_session.md new file mode 100644 index 0000000000..c564366e8b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_default_session.md @@ -0,0 +1,16 @@ +### `tf.get_default_session()` {#get_default_session} + +Returns the default session for the current thread. + +The returned `Session` will be the innermost session on which a +`Session` or `Session.as_default()` context has been entered. + +NOTE: The default session is a property of the current thread. If you +create a new thread, and wish to use the default session in that +thread, you must explicitly add a `with sess.as_default():` in that +thread's function. + +##### Returns: + + The default `Session` being used in the current thread. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable.md deleted file mode 100644 index 59e1a1797a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable.md +++ /dev/null @@ -1,72 +0,0 @@ -### `tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True)` {#get_variable} - -Gets an existing variable with these parameters or create a new one. - -This function prefixes the name with the current variable scope -and performs reuse checks. See the -[Variable Scope How To](../../how_tos/variable_scope/index.md) -for an extensive description of how reusing works. Here is a basic example: - -```python -with tf.variable_scope("foo"): - v = tf.get_variable("v", [1]) # v.name == "foo/v:0" - w = tf.get_variable("w", [1]) # w.name == "foo/w:0" -with tf.variable_scope("foo", reuse=True) - v1 = tf.get_variable("v") # The same as v above. -``` - -If initializer is `None` (the default), the default initializer passed in -the variable scope will be used. If that one is `None` too, a -`UniformUnitScalingInitializer` will be used. The initializer can also be -a Tensor, in which case the variable is initialized to this value and shape. - -Similarly, if the regularizer is `None` (the default), the default regularizer -passed in the variable scope will be used (if that is `None` too, -then by default no regularization is performed). - -If a partitioner is provided, first a sharded `Variable` is created -via `_get_partitioned_variable_list`, and the return value is a -`Tensor` composed of the shards concatenated along the partition axis. - -Some useful partitioners are available. See, e.g., -`variable_axis_size_partitioner`. - -##### Args: - - -* `name`: The name of the new or existing variable. 
-* `shape`: Shape of the new or existing variable. -* `dtype`: Type of the new or existing variable (defaults to `DT_FLOAT`). -* `initializer`: Initializer for the variable if one is created. -* `regularizer`: A (Tensor -> Tensor or None) function; the result of - applying it on a newly created variable will be added to the collection - GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. -* `trainable`: If `True` also add the variable to the graph collection - `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). -* `collections`: List of graph collections keys to add the Variable to. - Defaults to `[GraphKeys.VARIABLES]` (see tf.Variable). - If partitioning is enabled and used, the concatenated return value - is also added to collection `GraphKeys.CONCATENATED_VARIABLES`. -* `caching_device`: Optional device string or function describing where the - Variable should be cached for reading. Defaults to the Variable's - device. If not `None`, caches on another device. Typical use is to - cache on the device where the Ops using the Variable reside, to - deduplicate copying through `Switch` and other conditional statements. -* `partitioner`: Optional callable that accepts a fully defined `TensorShape` - and `dtype` of the Variable to be created, and returns a list of - partitions for each axis (currently only one axis can be partitioned). -* `validate_shape`: If False, allows the variable to be initialized with a - value of unknown shape. If True, the default, the shape of initial_value - must be known. - -##### Returns: - - The created or existing variable. - -##### Raises: - - -* `ValueError`: when creating a new variable and shape is not declared, - or when violating reuse during variable creation. Reuse is set inside - `variable_scope`. 
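The reuse checks described above amount to get-or-create bookkeeping keyed by the scope-prefixed name. A hypothetical pure-Python model (the `VariableStore` class and its behavior are simplifications for illustration, not TensorFlow's internal code):

```python
class VariableStore:
    """Sketch of get_variable's create-vs-reuse rules."""

    def __init__(self):
        self._vars = {}

    def get_variable(self, scope, name, shape=None, reuse=False):
        full_name = "%s/%s" % (scope, name)       # the scope prefixes the name
        if reuse:
            if full_name not in self._vars:
                raise ValueError("Variable %s does not exist." % full_name)
            return self._vars[full_name]          # reuse the existing variable
        if full_name in self._vars:
            raise ValueError(
                "Variable %s already exists; set reuse=True." % full_name)
        if shape is None:
            raise ValueError("Shape must be declared to create %s." % full_name)
        self._vars[full_name] = [0.0] * shape[0]  # stand-in for an initializer
        return self._vars[full_name]

store = VariableStore()
v = store.get_variable("foo", "v", shape=[1])    # creates "foo/v"
v1 = store.get_variable("foo", "v", reuse=True)  # returns the same object
assert v is v1
```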
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ifft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ifft.md deleted file mode 100644 index 26582404f6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ifft.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.ifft(input, name=None)` {#ifft} - -Compute the inverse 1-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 vector. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - The inverse 1D Fourier Transform of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.decode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.decode_png.md deleted file mode 100644 index 4332af7704..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.decode_png.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.image.decode_png(contents, channels=None, dtype=None, name=None)` {#decode_png} - -Decode a PNG-encoded image to a uint8 or uint16 tensor. - -The attr `channels` indicates the desired number of color channels for the -decoded image. - -Accepted values are: - -* 0: Use the number of channels in the PNG-encoded image. -* 1: output a grayscale image. -* 3: output an RGB image. -* 4: output an RGBA image. - -If needed, the PNG-encoded image is transformed to match the requested number -of color channels. - -##### Args: - - -* `contents`: A `Tensor` of type `string`. 0-D. The PNG-encoded image. -* `channels`: An optional `int`. Defaults to `0`. - Number of color channels for the decoded image. -* `dtype`: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `dtype`. 3-D with shape `[height, width, channels]`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.per_image_whitening.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.per_image_whitening.md deleted file mode 100644 index 8f72af6a31..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.per_image_whitening.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.image.per_image_whitening(image)` {#per_image_whitening} - -Linearly scales `image` to have zero mean and unit norm. - -This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average -of all values in image, and -`adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`. - -`stddev` is the standard deviation of all values in `image`. It is capped -away from zero to protect against division by 0 when handling uniform images. - -Note that this implementation is limited: -* It only whitens based on the statistics of an individual image. -* It does not take into account the covariance structure. - -##### Args: - - -* `image`: 3-D tensor of shape `[height, width, channels]`. - -##### Returns: - - The whitened image with same shape as `image`. - -##### Raises: - - -* `ValueError`: if the shape of 'image' is incompatible with this function. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.random_flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.random_flip_up_down.md new file mode 100644 index 0000000000..7ed36f5df2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.random_flip_up_down.md @@ -0,0 +1,24 @@ +### `tf.image.random_flip_up_down(image, seed=None)` {#random_flip_up_down} + +Randomly flips an image vertically (upside down). + +With a 1 in 2 chance, outputs the contents of `image` flipped along the first +dimension, which is `height`. Otherwise output the image as-is. 
+
+##### Args:
+
+
+* `image`: A 3-D tensor of shape `[height, width, channels]`.
+* `seed`: A Python integer. Used to create a random seed. See
+    [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
+    for behavior.
+
+##### Returns:
+
+  A 3-D tensor of the same type and shape as `image`.
+
+##### Raises:
+
+
+* `ValueError`: if the shape of `image` is not supported.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.import_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.import_graph_def.md
new file mode 100644
index 0000000000..0ff3d621d4
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.import_graph_def.md
@@ -0,0 +1,49 @@
+### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)` {#import_graph_def}
+
+Imports the TensorFlow graph in `graph_def` into the Python `Graph`.
+
+This function provides a way to import a serialized TensorFlow
+[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
+protocol buffer, and extract individual objects in the `GraphDef` as
+[`Tensor`](#Tensor) and [`Operation`](#Operation) objects. See
+[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a
+`GraphDef` proto.
+
+##### Args:
+
+
+* `graph_def`: A `GraphDef` proto containing operations to be imported into
+    the default graph.
+* `input_map`: A dictionary mapping input names (as strings) in `graph_def`
+    to `Tensor` objects. The values of the named input tensors in the
+    imported graph will be re-mapped to the respective `Tensor` values.
+* `return_elements`: A list of strings containing operation names in
+    `graph_def` that will be returned as `Operation` objects; and/or
+    tensor names in `graph_def` that will be returned as `Tensor` objects.
+* `name`: (Optional.) A prefix that will be prepended to the names in
+    `graph_def`. Defaults to `"import"`.
+* `op_dict`: (Optional.) A dictionary mapping op type names to `OpDef` protos. + Must contain an `OpDef` proto for each op type named in `graph_def`. + If omitted, uses the `OpDef` protos registered in the global registry. +* `producer_op_list`: (Optional.) An `OpList` proto with the (possibly stripped) + list of `OpDef`s used by the producer of the graph. If provided, attrs + for ops in `graph_def` that are not in `op_dict` that have their default + value according to `producer_op_list` will be removed. This will allow + some more `GraphDef`s produced by later binaries to be accepted by + earlier binaries. + +##### Returns: + + A list of `Operation` and/or `Tensor` objects from the imported graph, + corresponding to the names in `return_elements`. + +##### Raises: + + +* `TypeError`: If `graph_def` is not a `GraphDef` proto, + `input_map` is not a dictionary mapping strings to `Tensor` objects, + or `return_elements` is not a list of strings. +* `ValueError`: If `input_map`, or `return_elements` contains names that + do not appear in `graph_def`, or `graph_def` is not well-formed (e.g. + it refers to an unknown tensor). + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md new file mode 100644 index 0000000000..9a0e5d8261 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md @@ -0,0 +1,10 @@ +### `tf.initialize_all_variables()` {#initialize_all_variables} + +Returns an Op that initializes all variables. + +This is just a shortcut for `initialize_variables(all_variables())` + +##### Returns: + + An Op that initializes all variables in the graph. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.invert_permutation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.invert_permutation.md deleted file mode 100644 index b12cc7e94c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.invert_permutation.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.invert_permutation(x, name=None)` {#invert_permutation} - -Computes the inverse permutation of a tensor. - -This operation computes the inverse of an index permutation. It takes a 1-D -integer tensor `x`, which represents the indices of a zero-based array, and -swaps each value with its index position. In other words, for an output tensor -`y` and an input tensor `x`, this operation computes the following: - -`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]` - -The values must include 0. There can be no duplicate values or negative values. - -For example: - -```prettyprint -# tensor `x` is [3, 4, 0, 2, 1] -invert_permutation(x) ==> [2, 4, 3, 0, 1] -``` - -##### Args: - - -* `x`: A `Tensor` of type `int32`. 1-D. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int32`. 1-D. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_nan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_nan.md new file mode 100644 index 0000000000..1bf3a6825c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_nan.md @@ -0,0 +1,14 @@ +### `tf.is_nan(x, name=None)` {#is_nan} + +Returns which elements of x are NaN. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. 
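The `invert_permutation` identity described earlier, `y[x[i]] = i`, is simple enough to model directly in Python (an illustrative sketch of the semantics, not the op itself):

```python
def invert_permutation(x):
    """Return y such that y[x[i]] = i, mirroring tf.invert_permutation."""
    y = [0] * len(x)
    for i, xi in enumerate(x):
        y[xi] = i   # the value xi moved to index i, so its inverse maps back
    return y

# The example from the doc above:
assert invert_permutation([3, 4, 0, 2, 1]) == [2, 4, 3, 0, 1]
```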
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables.md new file mode 100644 index 0000000000..b3612c7cbf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables.md @@ -0,0 +1,8 @@ +### `tf.local_variables()` {#local_variables} + +Returns all variables created with collection=[LOCAL_VARIABLES]. + +##### Returns: + + A list of local Variable objects. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.log.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.log.md deleted file mode 100644 index 4ce9ddac8c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.log.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.log(x, name=None)` {#log} - -Computes natural logarithm of x element-wise. - -I.e., \\(y = \log_e x\\). - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md new file mode 100644 index 0000000000..6602562ecc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md @@ -0,0 +1,46 @@ +### `tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)` {#matmul} + +Multiplies matrix `a` by matrix `b`, producing `a` * `b`. + +The inputs must be two-dimensional matrices, with matching inner dimensions, +possibly after transposition. + +Both matrices must be of the same type. The supported types are: +`float`, `double`, `int32`, `complex64`. 
+ +Either matrix can be transposed on the fly by setting the corresponding flag +to `True`. This is `False` by default. + +If one or both of the matrices contain a lot of zeros, a more efficient +multiplication algorithm can be used by setting the corresponding +`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. + +For example: + +```python +# 2-D tensor `a` +a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.] + [4. 5. 6.]] +# 2-D tensor `b` +b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.] + [9. 10.] + [11. 12.]] +c = tf.matmul(a, b) => [[58 64] + [139 154]] +``` + +##### Args: + + +* `a`: `Tensor` of type `float`, `double`, `int32` or `complex64`. +* `b`: `Tensor` with same type as `a`. +* `transpose_a`: If `True`, `a` is transposed before multiplication. +* `transpose_b`: If `True`, `b` is transposed before multiplication. +* `a_is_sparse`: If `True`, `a` is treated as a sparse matrix. +* `b_is_sparse`: If `True`, `b` is treated as a sparse matrix. +* `name`: Name for the operation (optional). + +##### Returns: + + A `Tensor` of the same type as `a`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_inverse.md new file mode 100644 index 0000000000..4172badef5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_inverse.md @@ -0,0 +1,27 @@ +### `tf.matrix_inverse(input, adjoint=None, name=None)` {#matrix_inverse} + +Calculates the inverse of a square invertible matrix or its adjoint (conjugate + +transpose). + +The op uses LU decomposition with partial pivoting to compute the inverse. + +If the matrix is not invertible there is no guarantee what the op does. It +may detect the condition and raise an exception or it may simply return a +garbage result. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. 
+ Shape is `[M, M]`. +* `adjoint`: An optional `bool`. Defaults to `False`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + Shape is `[M, M]`. If `adjoint` is `False` then `output` contains the + matrix inverse of `input`. If `adjoint` is `True` then `output` contains the + matrix inverse of the adjoint of `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_solve.md deleted file mode 100644 index b33decd2e9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_solve.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.matrix_solve(matrix, rhs, adjoint=None, name=None)` {#matrix_solve} - -Solves a system of linear equations. Checks for invertibility. - -##### Args: - - -* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[M, M]`. -* `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[M, K]`. -* `adjoint`: An optional `bool`. Defaults to `False`. - Boolean indicating whether to solve with `matrix` or its adjoint. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `matrix`. - Shape is `[M, K]`. If `adjoint` is `False` then `output` that solves - `matrix` * `output` = `rhs`. If `adjoint` is `True` then `output` that solves - `adjoint(matrix)` * `output` = `rhs`. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_triangular_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_triangular_solve.md
new file mode 100644
index 0000000000..5787145231
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matrix_triangular_solve.md
@@ -0,0 +1,32 @@
+### `tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#matrix_triangular_solve}
+
+Solves a system of linear equations with an upper or lower triangular matrix by
+backsubstitution.
+
+`matrix` is a matrix of shape `[M, M]`. If `lower` is `True` then the strictly
+upper triangular part of `matrix` is assumed to be zero and not accessed.
+If `lower` is `False` then the strictly lower triangular part of `matrix` is
+assumed to be zero and not accessed.
+`rhs` is a matrix of shape `[M, K]`.
+
+The output is a matrix of shape `[M, K]`. If `adjoint` is `False` then `output`
+satisfies the matrix equation `matrix` * `output` = `rhs`.
+If `adjoint` is `True` then `output` satisfies the matrix equation
+`adjoint(matrix)` * `output` = `rhs`.
+
+##### Args:
+
+
+* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+    Shape is `[M, M]`.
+* `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[M, K]`.
+* `lower`: An optional `bool`. Defaults to `True`.
+    Boolean indicating whether `matrix` is lower or upper triangular.
+* `adjoint`: An optional `bool`. Defaults to `False`.
+    Boolean indicating whether to solve with `matrix` or its adjoint.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `matrix`. Shape is `[M, K]`.
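A minimal pure-Python sketch of the substitution procedure described above, assuming a single right-hand-side column (K = 1) and no adjoint; this illustrates the semantics only, not the op's implementation:

```python
def triangular_solve(matrix, rhs, lower=True):
    """Solve matrix @ x = rhs for a triangular matrix by substitution.

    `matrix` is a list of M rows; `rhs` is a list of M values.
    Lower-triangular systems are solved by forward substitution
    (row 0 first); upper-triangular ones by back substitution.
    """
    m = len(matrix)
    x = [0.0] * m
    rows = range(m) if lower else range(m - 1, -1, -1)
    for i in rows:
        # Only already-solved unknowns contribute; the rest multiply zeros.
        acc = sum(matrix[i][j] * x[j] for j in range(m) if j != i)
        x[i] = (rhs[i] - acc) / matrix[i][i]
    return x

# Lower-triangular example: [[2, 0], [1, 3]] @ x = [4, 5]  ->  x = [2, 1]
assert triangular_solve([[2, 0], [1, 3]], [4, 5]) == [2.0, 1.0]
```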
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.mod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.mod.md deleted file mode 100644 index 5bfe1058a7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.mod.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.mod(x, y, name=None)` {#mod} - -Returns element-wise remainder of division. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.dropout.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.dropout.md deleted file mode 100644 index 4f2b7c0214..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.dropout.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)` {#dropout} - -Computes dropout. - -With probability `keep_prob`, outputs the input element scaled up by -`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected -sum is unchanged. - -By default, each element is kept or dropped independently. If `noise_shape` -is specified, it must be -[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) -to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` -will make independent decisions. For example, if `shape(x) = [k, l, m, n]` -and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be -kept independently and each row and column will be kept or not kept together. - -##### Args: - - -* `x`: A tensor. -* `keep_prob`: A scalar `Tensor` with the same type as x. The probability - that each element is kept. 
-* `noise_shape`: A 1-D `Tensor` of type `int32`, representing the - shape for randomly generated keep/drop flags. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for this operation (optional). - -##### Returns: - - A Tensor of the same shape of `x`. - -##### Raises: - - -* `ValueError`: If `keep_prob` is not in `(0, 1]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.in_top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.in_top_k.md new file mode 100644 index 0000000000..f46780649d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.in_top_k.md @@ -0,0 +1,33 @@ +### `tf.nn.in_top_k(predictions, targets, k, name=None)` {#in_top_k} + +Says whether the targets are in the top `K` predictions. + +This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the +prediction for the target class is among the top `k` predictions among +all predictions for example `i`. Note that the behavior of `InTopK` differs +from the `TopK` op in its handling of ties; if multiple classes have the +same prediction value and straddle the top-`k` boundary, all of those +classes are considered to be in the top `k`. + +More formally, let + + \\(predictions_i\\) be the predictions for all classes for example `i`, + \\(targets_i\\) be the target class for example `i`, + \\(out_i\\) be the output for example `i`, + +$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ + +##### Args: + + +* `predictions`: A `Tensor` of type `float32`. + A `batch_size` x `classes` tensor. +* `targets`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A `batch_size` vector of class ids. +* `k`: An `int`. Number of top elements to look at for computing precision. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.l2_normalize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.l2_normalize.md deleted file mode 100644 index fdcdd71e20..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.l2_normalize.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)` {#l2_normalize} - -Normalizes along dimension `dim` using an L2 norm. - -For a 1-D tensor with `dim = 0`, computes - - output = x / sqrt(max(sum(x**2), epsilon)) - -For `x` with more dimensions, independently normalizes each 1-D slice along -dimension `dim`. - -##### Args: - - -* `x`: A `Tensor`. -* `dim`: Dimension along which to normalize. -* `epsilon`: A lower bound value for the norm. Will use `sqrt(epsilon)` as the - divisor if `norm < sqrt(epsilon)`. -* `name`: A name for this operation (optional). - -##### Returns: - - A `Tensor` with the same shape as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.learned_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.learned_unigram_candidate_sampler.md new file mode 100644 index 0000000000..4f69938e59 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.learned_unigram_candidate_sampler.md @@ -0,0 +1,53 @@ +### `tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#learned_unigram_candidate_sampler} + +Samples a set of classes from a distribution learned during training. + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. 
+ +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is constructed on the fly +during training. It is a unigram distribution over the target +classes seen so far during training. Every integer in `[0, range_max)` +begins with a weight of 1, and is incremented by 1 each time it is +seen as a target class. The base distribution is not saved to checkpoints, +so it is reset when the model is reloaded. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + +##### Args: + + +* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. +* `num_true`: An `int`. The number of target classes per training example. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. +* `unique`: A `bool`. Determines whether all sampled classes in a batch are + unique. +* `range_max`: An `int`. The number of possible classes. +* `seed`: An `int`. An operation-specific seed. Default is 0. +* `name`: A name for the operation (optional). + +##### Returns: + + +* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. + The sampled classes. +* `true_expected_count`: A tensor of type `float`. Same shape as + `true_classes`. The expected counts under the sampling distribution + of each of `true_classes`. +* `sampled_expected_count`: A tensor of type `float`. Same shape as + `sampled_candidates`. 
The expected counts under the sampling distribution + of each of `sampled_candidates`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.normalize_moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.normalize_moments.md deleted file mode 100644 index d7a6b9cab4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.normalize_moments.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None)` {#normalize_moments} - -Calculate the mean and variance of based on the sufficient statistics. - -##### Args: - - -* `counts`: A `Tensor` containing a the total count of the data (one value). -* `mean_ss`: A `Tensor` containing the mean sufficient statistics: the (possibly - shifted) sum of the elements to average over. -* `variance_ss`: A `Tensor` containing the variance sufficient statistics: the - (possibly shifted) squared sum of the data to compute the variance over. -* `shift`: A `Tensor` containing the value by which the data is shifted for - numerical stability, or `None` if no shift was performed. -* `name`: Name used to scope the operations that compute the moments. - -##### Returns: - - Two `Tensor` objects: `mean` and `variance`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu6.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu6.md new file mode 100644 index 0000000000..9695e557eb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu6.md @@ -0,0 +1,15 @@ +### `tf.nn.relu6(features, name=None)` {#relu6} + +Computes Rectified Linear 6: `min(max(features, 0), 6)`. + +##### Args: + + +* `features`: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, + `int16`, or `int8`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with the same type as `features`. 
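As an illustration of the `relu6` semantics above (a NumPy sketch of the element-wise formula, not the TensorFlow op itself):

```python
import numpy as np

def relu6(features):
    # min(max(features, 0), 6), applied element-wise
    return np.minimum(np.maximum(features, 0), 6)

x = np.array([-3.0, 0.5, 4.0, 9.0])
print(relu6(x))  # negative values clip to 0, values above 6 clip to 6
```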
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ones.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ones.md deleted file mode 100644 index 8a4c9073d0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.ones.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.ones(shape, dtype=tf.float32, name=None)` {#ones} - -Creates a tensor with all elements set to 1. - -This operation returns a tensor of type `dtype` with shape `shape` and all -elements set to 1. - -For example: - -```python -tf.ones([2, 3], int32) ==> [[1, 1, 1], [1, 1, 1]] -``` - -##### Args: - - -* `shape`: Either a list of integers, or a 1-D `Tensor` of type `int32`. -* `dtype`: The type of an element in the resulting `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with all elements set to 1. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.pack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.pack.md deleted file mode 100644 index 75a5fbe15c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.pack.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.pack(values, name='pack')` {#pack} - -Packs a list of rank-`R` tensors into one rank-`(R+1)` tensor. - -Packs tensors in `values` into a tensor with rank one higher than each tensor -in `values` and shape `[len(values)] + values[0].shape`. The output satisfies -`output[i, ...] = values[i][...]`. - -This is the opposite of unpack. The numpy equivalent is - - tf.pack([x, y, z]) = np.asarray([x, y, z]) - -##### Args: - - -* `values`: A list of `Tensor` objects with the same shape and type. -* `name`: A name for this operation (optional). - -##### Returns: - - -* `output`: A packed `Tensor` with the same type as `values`. 
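The packing behavior described above can be sketched in NumPy, where `np.stack` (or the documented `np.asarray` equivalent) plays the same role of adding a new leading axis:

```python
import numpy as np

x = np.array([1, 4])
y = np.array([2, 5])
z = np.array([3, 6])

# tf.pack([x, y, z]) packs the tensors along a new leading dimension,
# so output[i, ...] == values[i][...]
packed = np.stack([x, y, z])
print(packed.shape)  # (3, 2)

# matches the documented numpy equivalent np.asarray([x, y, z])
assert (packed == np.asarray([x, y, z])).all()
```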
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_shuffle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_shuffle.md deleted file mode 100644 index 14f40d64af..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_shuffle.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.random_shuffle(value, seed=None, name=None)` {#random_shuffle} - -Randomly shuffles a tensor along its first dimension. - -The tensor is shuffled along dimension 0, such that each `value[j]` is mapped -to one and only one `output[i]`. For example, a mapping that might occur for a -3x2 tensor is: - -```python -[[1, 2], [[5, 6], - [3, 4], ==> [1, 2], - [5, 6]] [3, 4]] -``` - -##### Args: - - -* `value`: A Tensor to be shuffled. -* `seed`: A Python integer. Used to create a random seed for the distribution. - See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for the operation (optional). - -##### Returns: - - A tensor of same shape and type as `value`, shuffled along its first - dimension. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform_initializer.md deleted file mode 100644 index 1afd318d3b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform_initializer.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None, dtype=tf.float32)` {#random_uniform_initializer} - -Returns an initializer that generates tensors with a uniform distribution. - -##### Args: - - -* `minval`: a python scalar or a scalar tensor. lower bound of the range - of random values to generate. -* `maxval`: a python scalar or a scalar tensor. upper bound of the range - of random values to generate. -* `seed`: A Python integer. 
Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer that generates tensors with a uniform distribution. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.reduce_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.reduce_min.md deleted file mode 100644 index c93a902adc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.reduce_min.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_min} - -Computes the minimum of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. -Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -##### Args: - - -* `input_tensor`: The tensor to reduce. Should have numeric type. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. -* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. 
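The `reduction_indices`/`keep_dims` behavior described above maps directly onto NumPy's `axis`/`keepdims` arguments; a quick sketch (NumPy stand-in, not the TensorFlow op):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [3.0, 0.5]])

# like reduction_indices=[1]: reduce along each row
row_min = np.min(x, axis=1)
# like keep_dims=True: the reduced axis is retained with length 1
row_min_kept = np.min(x, axis=1, keepdims=True)
# like reduction_indices=None: all dimensions reduced to a single element
overall = np.min(x)

print(row_min, row_min_kept.shape, overall)
```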
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_mul.md deleted file mode 100644 index 5af291597d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_mul.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.scalar_mul(scalar, x)` {#scalar_mul} - -Multiplies a scalar times a `Tensor` or `IndexedSlices` object. - -Intended for use in gradient code which might deal with `IndexedSlices` -objects, which are easy to multiply by a scalar but more expensive to -multiply with arbitrary tensors. - -##### Args: - - -* `scalar`: A 0-D scalar `Tensor`. Must have known shape. -* `x`: A `Tensor` or `IndexedSlices` to be scaled. - -##### Returns: - - `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`. - -##### Raises: - - -* `ValueError`: if scalar is not a 0-D `scalar`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_summary.md deleted file mode 100644 index 1e8c3479e4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scalar_summary.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.scalar_summary(tags, values, collections=None, name=None)` {#scalar_summary} - -Outputs a `Summary` protocol buffer with scalar values. - -The input `tags` and `values` must have the same shape. The generated -summary has a summary value for each tag-value pair in `tags` and `values`. - -##### Args: - - -* `tags`: A `string` `Tensor`. Tags for the summaries. -* `values`: A real numeric Tensor. Values for the summaries. -* `collections`: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar `Tensor` of type `string`. 
The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_max.md deleted file mode 100644 index c9d7a28900..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_max.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.segment_max(data, segment_ids, name=None)` {#segment_max} - -Computes the maximum along segments of a tensor. - -Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation) -for an explanation of segments. - -Computes a tensor such that -\\(output_i = \max_j(data_j)\\) where `max` is over `j` such -that `segment_ids[j] == i`. - -
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_mean.md deleted file mode 100644 index 5d901859a9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.segment_mean.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.segment_mean(data, segment_ids, name=None)` {#segment_mean} - -Computes the mean along segments of a tensor. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -Computes a tensor such that -\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is -over `j` such that `segment_ids[j] == i` and `N` is the total number of -values summed. - -
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sigmoid.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sigmoid.md deleted file mode 100644 index b056a48716..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sigmoid.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.sigmoid(x, name=None)` {#sigmoid} - -Computes sigmoid of `x` element-wise. - -Specifically, `y = 1 / (1 + exp(-x))`. - -##### Args: - - -* `x`: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`, - or `qint32`. -* `name`: A name for the operation (optional). - -##### Returns: - - A Tensor with the same type as `x` if `x.dtype != qint32` - otherwise the return type is `quint8`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_depth.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_depth.md new file mode 100644 index 0000000000..68706d2e5a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_depth.md @@ -0,0 +1,87 @@ +### `tf.space_to_depth(input, block_size, name=None)` {#space_to_depth} + +SpaceToDepth for tensors of type T. + +Rearranges blocks of spatial data, into depth. More specifically, +this op outputs a copy of the input tensor where values from the `height` +and `width` dimensions are moved to the `depth` dimension. 
+The attr `block_size` indicates the input block size and how the data is moved.
+
+  * Non-overlapping blocks of size `block_size x block_size` are rearranged
+    into depth at each location.
+  * The depth of the output tensor is `input_depth * block_size * block_size`.
+  * The input tensor's height and width must be divisible by block_size.
+
+That is, assuming the input is in the shape:
+`[batch, height, width, depth]`,
+the shape of the output will be:
+`[batch, height/block_size, width/block_size, depth*block_size*block_size]`
+
+This operation requires that the input tensor be of rank 4, and that
+`block_size` be >= 1 and a divisor of both the input `height` and `width`.
+
+This operation is useful for resizing the activations between convolutions
+(but keeping all data), e.g. instead of pooling. It is also useful for training
+purely convolutional models.
+
+For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:
+
+```prettyprint
+x = [[[[1], [2]],
+      [[3], [4]]]]
+```
+
+This operation will output a tensor of shape `[1, 1, 1, 4]`:
+
+```prettyprint
+[[[[1, 2, 3, 4]]]]
+```
+
+Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`;
+the corresponding output will have a single element (i.e. width and height are
+both 1) and will have a depth of 4 channels (1 * block_size * block_size).
+The output element shape is `[1, 1, 4]`.
+
+For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.
+
+```prettyprint
+x = [[[[1, 2, 3], [4, 5, 6]],
+      [[7, 8, 9], [10, 11, 12]]]]
+```
+
+This operation, for block_size of 2, will return the following tensor of shape
+`[1, 1, 1, 12]`:
+
+```prettyprint
+[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
+```
+
+Similarly, for the following input of shape `[1, 4, 4, 1]`, and a `block_size`
+of 2:
+
+```prettyprint
+x = [[[[1],  [2],  [5],  [6]],
+      [[3],  [4],  [7],  [8]],
+      [[9],  [10], [13], [14]],
+      [[11], [12], [15], [16]]]]
+```
+
+the operator will return the following tensor of shape `[1, 2, 2, 4]`:
+
+```prettyprint
+[[[[1, 2, 3, 4],
+   [5, 6, 7, 8]],
+  [[9, 10, 11, 12],
+   [13, 14, 15, 16]]]]
+```
+
+##### Args:
+
+
+* `input`: A `Tensor`.
+* `block_size`: An `int`. The size of the spatial block.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_segment_mean.md
deleted file mode 100644
index d95830b8a9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_segment_mean.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.sparse_segment_mean(data, indices, segment_ids, name=None)` {#sparse_segment_mean}
-
-Computes the mean along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
-
-##### Args:
-
-
-* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* `indices`: A `Tensor` of type `int32`.
-    A 1-D tensor. Has same rank as `segment_ids`.
-* `segment_ids`: A `Tensor` of type `int32`.
-    A 1-D tensor. Values should be sorted and can be repeated.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor`. Has the same type as `data`.
-  Has same shape as data, except for dimension 0 which
-  has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.stop_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.stop_gradient.md
new file mode 100644
index 0000000000..53759f49ff
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.stop_gradient.md
@@ -0,0 +1,34 @@
+### `tf.stop_gradient(input, name=None)` {#stop_gradient}
+
+Stops gradient computation.
+
+When executed in a graph, this op outputs its input tensor as-is.
+
+When building ops to compute gradients, this op prevents the contribution of
+its inputs from being taken into account. Normally, the gradient generator adds
+ops to a graph to compute the derivatives of a specified 'loss' by recursively
+finding out inputs that contributed to its computation. If you insert this op
+in the graph, its inputs are masked from the gradient generator. They are not
+taken into account for computing gradients.
+
+This is useful any time you want to compute a value with TensorFlow but need
+to pretend that the value was a constant. Some examples include:
+
+* The *EM* algorithm where the *M-step* should not involve backpropagation
+  through the output of the *E-step*.
+* Contrastive divergence training of Boltzmann machines where, when
+  differentiating the energy function, the training must not backpropagate
+  through the graph that generated the samples from the model.
+* Adversarial training, where no backprop should happen through the adversarial
+  example generation process.
+
+##### Args:
+
+
+* `input`: A `Tensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
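The "pretend the value was a constant" behavior above can be illustrated with a toy forward-mode autodiff sketch (a hypothetical `Dual` class, not TensorFlow's graph machinery):

```python
class Dual:
    """Toy forward-mode value: tracks a value and its derivative w.r.t. x."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

def stop_gradient(d):
    # pass the value through unchanged, but contribute nothing to the derivative
    return Dual(d.val, 0.0)

x = Dual(3.0, 1.0)               # dx/dx = 1
full = x * x                     # d(x^2)/dx = 2x = 6 at x = 3
stopped = x * stop_gradient(x)   # second factor treated as a constant: grad = x = 3
print(full.grad, stopped.grad)   # 6.0 3.0
```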
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md
deleted file mode 100644
index 653236cf9f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.test.assert_equal_graph_def(actual, expected)` {#assert_equal_graph_def}
-
-Asserts that two `GraphDef`s are (mostly) the same.
-
-Compares two `GraphDef` protos for equality, ignoring versions and ordering of
-nodes, attrs, and control inputs.  Node names are used to match up nodes
-between the graphs, so the naming of nodes must be consistent.
-
-##### Args:
-
-
-* `actual`: The `GraphDef` we have.
-* `expected`: The `GraphDef` we expected.
-
-##### Raises:
-
-
-* `AssertionError`: If the `GraphDef`s do not match.
-* `TypeError`: If either argument is not a `GraphDef`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient_error.md
new file mode 100644
index 0000000000..d2c91a66b3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient_error.md
@@ -0,0 +1,36 @@
+### `tf.test.compute_gradient_error(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None)` {#compute_gradient_error}
+
+Computes the gradient error.
+
+Computes the maximum error for dy/dx between the computed Jacobian and the
+numerically estimated Jacobian.
+
+This function will modify the tensors passed in as it adds more operations,
+which changes the consumers of the operations of the input tensors.
+
+This function adds operations to the current session. To compute the error
+using a particular device, such as a GPU, use the standard methods for
+setting a device (e.g.
using with sess.graph.device() or setting a device
+function in the session constructor).
+
+##### Args:
+
+
+* `x`: a tensor or list of tensors
+* `x_shape`: the dimensions of x as a tuple or an array of ints. If x is a list,
+  then this is the list of shapes.
+* `y`: a tensor
+* `y_shape`: the dimensions of y as a tuple or an array of ints.
+* `x_init_value`: (optional) a numpy array of the same shape as "x"
+  representing the initial value of x. If x is a list, this should be a list
+  of numpy arrays. If this is `None`, the function will pick a random tensor
+  as the initial value.
+* `delta`: (optional) the amount of perturbation.
+* `init_targets`: list of targets to run to initialize model params.
+  TODO(mrry): Remove this argument.
+
+##### Returns:
+
+  The maximum error between the two Jacobians.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_bfloat16.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_bfloat16.md
new file mode 100644
index 0000000000..3d55da1110
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_bfloat16.md
@@ -0,0 +1,19 @@
+### `tf.to_bfloat16(x, name='ToBFloat16')` {#to_bfloat16}
+
+Casts a tensor to type `bfloat16`.
+
+##### Args:
+
+
+* `x`: A `Tensor` or `SparseTensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
+
+##### Raises:
+
+
+* `TypeError`: If `x` cannot be cast to `bfloat16`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_float.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_float.md
new file mode 100644
index 0000000000..b45b49b982
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_float.md
@@ -0,0 +1,19 @@
+### `tf.to_float(x, name='ToFloat')` {#to_float}
+
+Casts a tensor to type `float32`.
+
+##### Args:
+
+
+* `x`: A `Tensor` or `SparseTensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
+
+##### Raises:
+
+
+* `TypeError`: If `x` cannot be cast to `float32`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.LooperThread.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.LooperThread.md
deleted file mode 100644
index 046f35d718..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.LooperThread.md
+++ /dev/null
@@ -1,215 +0,0 @@
-A thread that runs code repeatedly, optionally on a timer.
-
-This thread class is intended to be used with a `Coordinator`.  It repeatedly
-runs code specified either as `target` and `args` or by the `run_loop()`
-method.
-
-Before each run the thread checks if the coordinator has requested stop.  In
-that case the looper thread terminates immediately.
-
-If the code being run raises an exception, that exception is reported to the
-coordinator and the thread terminates.  The coordinator will then request all
-the other threads it coordinates to stop.
-
-You typically pass looper threads to the supervisor `Join()` method.
-- - -
-
-#### `tf.train.LooperThread.__init__(coord, timer_interval_secs, target=None, args=None, kwargs=None)` {#LooperThread.__init__}
-
-Create a LooperThread.
-
-##### Args:
-
-
-* `coord`: A Coordinator.
-* `timer_interval_secs`: Time boundaries at which to call Run(), or None
-  if it should be called back to back.
-* `target`: Optional callable object that will be executed in the thread.
-* `args`: Optional arguments to pass to `target` when calling it.
-* `kwargs`: Optional keyword arguments to pass to `target` when calling it.
-
-##### Raises:
-
-
-* `ValueError`: If one of the arguments is invalid.
- - -- - - - -#### `tf.train.LooperThread.daemon` {#LooperThread.daemon} - -A boolean value indicating whether this thread is a daemon thread (True) or not (False). - -This must be set before start() is called, otherwise RuntimeError is -raised. Its initial value is inherited from the creating thread; the -main thread is not a daemon thread and therefore all threads created in -the main thread default to daemon = False. - -The entire Python program exits when no alive non-daemon threads are -left. - - -- - - - -#### `tf.train.LooperThread.getName()` {#LooperThread.getName} - - - - -- - - - -#### `tf.train.LooperThread.ident` {#LooperThread.ident} - -Thread identifier of this thread or None if it has not been started. - -This is a nonzero integer. See the thread.get_ident() function. Thread -identifiers may be recycled when a thread exits and another thread is -created. The identifier is available even after the thread has exited. - - -- - - - -#### `tf.train.LooperThread.isAlive()` {#LooperThread.isAlive} - -Return whether the thread is alive. - -This method returns True just before the run() method starts until just -after the run() method terminates. The module function enumerate() -returns a list of all alive threads. - - -- - - - -#### `tf.train.LooperThread.isDaemon()` {#LooperThread.isDaemon} - - - - -- - - - -#### `tf.train.LooperThread.is_alive()` {#LooperThread.is_alive} - -Return whether the thread is alive. - -This method returns True just before the run() method starts until just -after the run() method terminates. The module function enumerate() -returns a list of all alive threads. - - -- - - - -#### `tf.train.LooperThread.join(timeout=None)` {#LooperThread.join} - -Wait until the thread terminates. - -This blocks the calling thread until the thread whose join() method is -called terminates -- either normally or through an unhandled exception -or until the optional timeout occurs. 
- -When the timeout argument is present and not None, it should be a -floating point number specifying a timeout for the operation in seconds -(or fractions thereof). As join() always returns None, you must call -isAlive() after join() to decide whether a timeout happened -- if the -thread is still alive, the join() call timed out. - -When the timeout argument is not present or None, the operation will -block until the thread terminates. - -A thread can be join()ed many times. - -join() raises a RuntimeError if an attempt is made to join the current -thread as that would cause a deadlock. It is also an error to join() a -thread before it has been started and attempts to do so raises the same -exception. - - -- - - - -#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop} - -Start a LooperThread that calls a function periodically. - -If `timer_interval_secs` is None the thread calls `target(args)` -repeatedly. Otherwise `target(args)` is called every `timer_interval_secs` -seconds. The thread terminates when a stop of the coordinator is -requested. - -##### Args: - - -* `coord`: A Coordinator. -* `timer_interval_secs`: Number. Time boundaries at which to call `target`. -* `target`: A callable object. -* `args`: Optional arguments to pass to `target` when calling it. -* `kwargs`: Optional keyword arguments to pass to `target` when calling it. - -##### Returns: - - The started thread. - - -- - - - -#### `tf.train.LooperThread.name` {#LooperThread.name} - -A string used for identification purposes only. - -It has no semantics. Multiple threads may be given the same name. The -initial name is set by the constructor. - - -- - - - -#### `tf.train.LooperThread.run()` {#LooperThread.run} - - - - -- - - - -#### `tf.train.LooperThread.run_loop()` {#LooperThread.run_loop} - -Called at 'timer_interval_secs' boundaries. 
- - -- - - - -#### `tf.train.LooperThread.setDaemon(daemonic)` {#LooperThread.setDaemon} - - - - -- - - - -#### `tf.train.LooperThread.setName(name)` {#LooperThread.setName} - - - - -- - - - -#### `tf.train.LooperThread.start()` {#LooperThread.start} - -Start the thread's activity. - -It must be called at most once per thread object. It arranges for the -object's run() method to be invoked in a separate thread of control. - -This method will raise a RuntimeError if called more than once on the -same thread object. - - -- - - - -#### `tf.train.LooperThread.start_loop()` {#LooperThread.start_loop} - -Called when the thread starts. - - -- - - - -#### `tf.train.LooperThread.stop_loop()` {#LooperThread.stop_loop} - -Called when the thread stops. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.MomentumOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.MomentumOptimizer.md new file mode 100644 index 0000000000..45256f65fc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.MomentumOptimizer.md @@ -0,0 +1,18 @@ +Optimizer that implements the Momentum algorithm. + +- - - + +#### `tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum')` {#MomentumOptimizer.__init__} + +Construct a new Momentum optimizer. + +##### Args: + + +* `learning_rate`: A `Tensor` or a floating point value. The learning rate. +* `momentum`: A `Tensor` or a floating point value. The momentum. +* `use_locking`: If `True` use locks for update operations. +* `name`: Optional name prefix for the operations created when applying + gradients. Defaults to "Momentum". 
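The momentum update can be sketched in plain NumPy (the velocity-accumulation form as commonly stated for this optimizer; the variable names and the toy objective `f(x) = x^2` are illustrative assumptions, not the graph ops themselves):

```python
import numpy as np

def momentum_step(var, accum, grad, learning_rate, momentum):
    # accumulate a decaying sum of past gradients, then step along it
    accum = momentum * accum + grad
    var = var - learning_rate * accum
    return var, accum

var, accum = np.array([1.0]), np.array([0.0])
for _ in range(3):
    grad = 2.0 * var  # gradient of the toy objective f(x) = x^2
    var, accum = momentum_step(var, accum, grad,
                               learning_rate=0.1, momentum=0.9)
print(var)  # moves toward the minimum at 0
```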
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md new file mode 100644 index 0000000000..247f621e8a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md @@ -0,0 +1,4 @@ +#### `tf.train.Saver.from_proto(saver_def)` {#Saver.from_proto} + +Returns a `Saver` object created from `saver_def`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.md new file mode 100644 index 0000000000..8bf255040e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.md @@ -0,0 +1,315 @@ +Saves and restores variables. + +See [Variables](../../how_tos/variables/index.md) +for an overview of variables, saving and restoring. + +The `Saver` class adds ops to save and restore variables to and from +*checkpoints*. It also provides convenience methods to run these ops. + +Checkpoints are binary files in a proprietary format which map variable names +to tensor values. The best way to examine the contents of a checkpoint is to +load it using a `Saver`. + +Savers can automatically number checkpoint filenames with a provided counter. +This lets you keep multiple checkpoints at different steps while training a +model. For example you can number the checkpoint filenames with the training +step number. To avoid filling up disks, savers manage checkpoint files +automatically. For example, they can keep only the N most recent files, or +one checkpoint for every N hours of training. + +You number checkpoint filenames by passing a value to the optional +`global_step` argument to `save()`: + +```python +saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0' +... 
+saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000' +``` + +Additionally, optional arguments to the `Saver()` constructor let you control +the proliferation of checkpoint files on disk: + +* `max_to_keep` indicates the maximum number of recent checkpoint files to + keep. As new files are created, older files are deleted. If None or 0, + all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent + checkpoint files are kept.) + +* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent + `max_to_keep` checkpoint files, you might want to keep one checkpoint file + for every N hours of training. This can be useful if you want to later + analyze how a model progressed during a long training session. For + example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep + one checkpoint file for every 2 hours of training. The default value of + 10,000 hours effectively disables the feature. + +Note that you still have to call the `save()` method to save the model. +Passing these arguments to the constructor will not save variables +automatically for you. + +A training program that saves regularly looks like: + +```python +... +# Create a saver. +saver = tf.train.Saver(...variables...) +# Launch the graph and train, saving the model every 1,000 steps. +sess = tf.Session() +for step in xrange(1000000): + sess.run(..training_op..) + if step % 1000 == 0: + # Append the step number to the checkpoint name: + saver.save(sess, 'my-model', global_step=step) +``` + +In addition to checkpoint files, savers keep a protocol buffer on disk with +the list of recent checkpoints. This is used to manage numbered checkpoint +files and by `latest_checkpoint()`, which makes it easy to discover the path +to the most recent checkpoint. That protocol buffer is stored in a file named +'checkpoint' next to the checkpoint files. 
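How `max_to_keep` and `keep_checkpoint_every_n_hours` can interact is sketched below as a pure-Python retention policy. `checkpoints_to_delete` is a hypothetical helper written for this doc to illustrate the policy described above; it is not how `Saver` is actually implemented:

```python
def checkpoints_to_delete(checkpoints, max_to_keep, keep_every_n_hours):
    """Given (filename, time_in_hours) pairs ordered oldest-to-newest,
    return the filenames to delete: keep the `max_to_keep` most recent
    checkpoints, and also preserve the first checkpoint in each
    `keep_every_n_hours`-wide window among the older ones."""
    recent = {name for name, _ in checkpoints[-max_to_keep:]}
    to_delete, last_kept_time = [], None
    for name, t in checkpoints:
        if name in recent:
            continue
        if last_kept_time is None or t - last_kept_time >= keep_every_n_hours:
            last_kept_time = t  # kept as the periodic snapshot
        else:
            to_delete.append(name)
    return to_delete

ckpts = [('m-0', 0.0), ('m-1', 1.0), ('m-2', 2.0), ('m-3', 3.0), ('m-4', 4.0)]
# Keep the 2 most recent checkpoints, plus one per 2-hour window.
doomed = checkpoints_to_delete(ckpts, max_to_keep=2, keep_every_n_hours=2.0)
```

Here only `'m-1'` is deleted: `'m-3'` and `'m-4'` are the two most recent, while `'m-0'` and `'m-2'` survive as the 2-hourly snapshots.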
+ +If you create several savers, you can specify a different filename for the +protocol buffer file in the call to `save()`. + +- - - + +#### `tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None)` {#Saver.__init__} + +Creates a `Saver`. + +The constructor adds ops to save and restore variables. + +`var_list` specifies the variables that will be saved and restored. It can +be passed as a `dict` or a list: + +* A `dict` of names to variables: The keys are the names that will be + used to save or restore the variables in the checkpoint files. +* A list of variables: The variables will be keyed with their op name in + the checkpoint files. + +For example: + +```python +v1 = tf.Variable(..., name='v1') +v2 = tf.Variable(..., name='v2') + +# Pass the variables as a dict: +saver = tf.train.Saver({'v1': v1, 'v2': v2}) + +# Or pass them as a list. +saver = tf.train.Saver([v1, v2]) +# Passing a list is equivalent to passing a dict with the variable op names +# as keys: +saver = tf.train.Saver({v.op.name: v for v in [v1, v2]}) +``` + +The optional `reshape` argument, if `True`, allows restoring a variable from +a save file where the variable had a different shape, but the same number +of elements and type. This is useful if you have reshaped a variable and +want to reload it from an older checkpoint. + +The optional `sharded` argument, if `True`, instructs the saver to shard +checkpoints per device. + +##### Args: + + +* `var_list`: A list of `Variable` objects or a dictionary mapping names to + variables. If `None`, defaults to the list of all variables. +* `reshape`: If `True`, allows restoring parameters from a checkpoint + where the variables have a different shape. +* `sharded`: If `True`, shard the checkpoints, one per device. +* `max_to_keep`: Maximum number of recent checkpoints to keep. + Defaults to 5. 
+* `keep_checkpoint_every_n_hours`: How often to keep checkpoints. + Defaults to 10,000 hours. +* `name`: String. Optional name to use as a prefix when adding operations. +* `restore_sequentially`: A `Bool`, which if true, causes restore of different + variables to happen sequentially within each device. This can lower + memory usage when restoring very large models. +* `saver_def`: Optional `SaverDef` proto to use instead of running the + builder. This is only useful for specialty code that wants to recreate + a `Saver` object for a previously built `Graph` that had a `Saver`. + The `saver_def` proto should be the one returned by the + `as_saver_def()` call of the `Saver` that was created for that `Graph`. +* `builder`: Optional `SaverBuilder` to use if a `saver_def` was not provided. + Defaults to `BaseSaverBuilder()`. + +##### Raises: + + +* `TypeError`: If `var_list` is invalid. +* `ValueError`: If any of the keys or values in `var_list` are not unique. + + +- - - + +#### `tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True)` {#Saver.save} + +Saves variables. + +This method runs the ops added by the constructor for saving variables. +It requires a session in which the graph was launched. The variables to +save must also have been initialized. + +The method returns the path of the newly created checkpoint file. This +path can be passed directly to a call to `restore()`. + +##### Args: + + +* `sess`: A Session to use to save the variables. +* `save_path`: String. Path to the checkpoint filename. If the saver is + `sharded`, this is the prefix of the sharded checkpoint filename. +* `global_step`: If provided the global step number is appended to + `save_path` to create the checkpoint filename. The optional argument + can be a `Tensor`, a `Tensor` name or an integer. 
+* `latest_filename`: Optional name for the protocol buffer file that will
+    contain the list of most recent checkpoint filenames. That file,
+    kept in the same directory as the checkpoint files, is automatically
+    managed by the saver to keep track of recent checkpoints. Defaults to
+    'checkpoint'.
+* `meta_graph_suffix`: Suffix for `MetaGraphDef` file. Defaults to 'meta'.
+* `write_meta_graph`: `Boolean` indicating whether or not to write the meta
+    graph file.
+
+##### Returns:
+
+  A string: path at which the variables were saved. If the saver is
+    sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
+    is the number of shards created.
+
+##### Raises:
+
+
+* `TypeError`: If `sess` is not a `Session`.
+* `ValueError`: If `latest_filename` contains path components.
+
+
+- - -
+
+#### `tf.train.Saver.restore(sess, save_path)` {#Saver.restore}
+
+Restores previously saved variables.
+
+This method runs the ops added by the constructor for restoring variables.
+It requires a session in which the graph was launched. The variables to
+restore do not have to have been initialized, as restoring is itself a way
+to initialize variables.
+
+The `save_path` argument is typically a value previously returned from a
+`save()` call, or a call to `latest_checkpoint()`.
+
+##### Args:
+
+
+* `sess`: A `Session` to use to restore the parameters.
+* `save_path`: Path where parameters were previously saved.
+
+##### Raises:
+
+
+* `ValueError`: If the given `save_path` does not point to a file.
+
+
+
+Other utility methods.
+
+- - -
+
+#### `tf.train.Saver.last_checkpoints` {#Saver.last_checkpoints}
+
+List of not-yet-deleted checkpoint filenames.
+
+You can pass any of the returned values to `restore()`.
+
+##### Returns:
+
+  A list of checkpoint filenames, sorted from oldest to newest.
+
+
+- - -
+
+#### `tf.train.Saver.set_last_checkpoints(last_checkpoints)` {#Saver.set_last_checkpoints}
+
+DEPRECATED: Use set_last_checkpoints_with_time.
+ +Sets the list of old checkpoint filenames. + +##### Args: + + +* `last_checkpoints`: A list of checkpoint filenames. + +##### Raises: + + +* `AssertionError`: If last_checkpoints is not a list. + + +- - - + +#### `tf.train.Saver.as_saver_def()` {#Saver.as_saver_def} + +Generates a `SaverDef` representation of this saver. + +##### Returns: + + A `SaverDef` proto. + + + +#### Other Methods +- - - + +#### `tf.train.Saver.export_meta_graph(filename=None, collection_list=None, as_text=False)` {#Saver.export_meta_graph} + +Writes `MetaGraphDef` to save_path/filename. + +##### Args: + + +* `filename`: Optional meta_graph filename including the path. +* `collection_list`: List of string keys to collect. +* `as_text`: If `True`, writes the meta_graph as an ASCII proto. + +##### Returns: + + A `MetaGraphDef` proto. + + +- - - + +#### `tf.train.Saver.from_proto(saver_def)` {#Saver.from_proto} + +Returns a `Saver` object created from `saver_def`. + + +- - - + +#### `tf.train.Saver.set_last_checkpoints_with_time(last_checkpoints_with_time)` {#Saver.set_last_checkpoints_with_time} + +Sets the list of old checkpoint filenames and timestamps. + +##### Args: + + +* `last_checkpoints_with_time`: A list of tuples of checkpoint filenames and + timestamps. + +##### Raises: + + +* `AssertionError`: If last_checkpoints_with_time is not a list. + + +- - - + +#### `tf.train.Saver.to_proto()` {#Saver.to_proto} + +Converts this `Saver` to a `SaverDef` protocol buffer. + +##### Returns: + + A `SaverDef` protocol buffer. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SummaryWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SummaryWriter.md new file mode 100644 index 0000000000..a7f5aef5f1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SummaryWriter.md @@ -0,0 +1,170 @@ +Writes `Summary` protocol buffers to event files. 
+
+The `SummaryWriter` class provides a mechanism to create an event file in a
+given directory and add summaries and events to it. The class updates the
+file contents asynchronously. This allows a training program to call methods
+to add data to the file directly from the training loop, without slowing down
+training.
+
+- - -
+
+#### `tf.train.SummaryWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#SummaryWriter.__init__}
+
+Creates a `SummaryWriter` and an event file.
+
+On construction the summary writer creates a new event file in `logdir`.
+This event file will contain `Event` protocol buffers constructed when you
+call one of the following functions: `add_summary()`, `add_session_log()`,
+`add_event()`, or `add_graph()`.
+
+If you pass a `Graph` to the constructor it is added to
+the event file. (This is equivalent to calling `add_graph()` later).
+
+TensorBoard will pick the graph from the file and display it graphically so
+you can interactively explore the graph you built. You will usually pass
+the graph from the session in which you launched it:
+
+```python
+...create a graph...
+# Launch the graph in a session.
+sess = tf.Session()
+# Create a summary writer, add the 'graph' to the event file.
+writer = tf.train.SummaryWriter(<some-directory>, sess.graph)
+```
+
+The other arguments to the constructor control the asynchronous writes to
+the event file:
+
+* `flush_secs`: How often, in seconds, to flush the added summaries
+  and events to disk.
+* `max_queue`: Maximum number of summaries or events pending to be
+  written to disk before one of the 'add' calls blocks.
+
+##### Args:
+
+
+* `logdir`: A string. Directory where event file will be written.
+* `graph`: A `Graph` object, such as `sess.graph`.
+* `max_queue`: Integer. Size of the queue for pending events and summaries.
+* `flush_secs`: Number. How often, in seconds, to flush the
+    pending events and summaries to disk.
+* `graph_def`: DEPRECATED: Use the `graph` argument instead.
+
+
+
+- - -
+
+#### `tf.train.SummaryWriter.add_summary(summary, global_step=None)` {#SummaryWriter.add_summary}
+
+Adds a `Summary` protocol buffer to the event file.
+
+This method wraps the provided summary in an `Event` protocol buffer
+and adds it to the event file.
+
+You can pass the result of evaluating any summary op, using
+[`Session.run()`](client.md#Session.run) or
+[`Tensor.eval()`](framework.md#Tensor.eval), to this
+function. Alternatively, you can pass a `tf.Summary` protocol
+buffer that you populate with your own data. The latter is
+commonly done to report evaluation results in event files.
+
+##### Args:
+
+
+* `summary`: A `Summary` protocol buffer, optionally serialized as a string.
+* `global_step`: Number. Optional global step value to record with the
+    summary.
+
+
+- - -
+
+#### `tf.train.SummaryWriter.add_session_log(session_log, global_step=None)` {#SummaryWriter.add_session_log}
+
+Adds a `SessionLog` protocol buffer to the event file.
+
+This method wraps the provided session log in an `Event` protocol buffer
+and adds it to the event file.
+
+##### Args:
+
+
+* `session_log`: A `SessionLog` protocol buffer.
+* `global_step`: Number. Optional global step value to record with the
+    summary.
+
+
+- - -
+
+#### `tf.train.SummaryWriter.add_event(event)` {#SummaryWriter.add_event}
+
+Adds an event to the event file.
+
+##### Args:
+
+
+* `event`: An `Event` protocol buffer.
+
+
+- - -
+
+#### `tf.train.SummaryWriter.add_graph(graph, global_step=None, graph_def=None)` {#SummaryWriter.add_graph}
+
+Adds a `Graph` to the event file.
+
+The graph described by the protocol buffer will be displayed by
+TensorBoard. Most users pass a graph in the constructor instead.
+
+##### Args:
+
+
+* `graph`: A `Graph` object, such as `sess.graph`.
+* `global_step`: Number. Optional global step counter to record with the
+    graph.
+* `graph_def`: DEPRECATED. Use the `graph` parameter instead.
+
+##### Raises:
+
+
+* `ValueError`: If both graph and graph_def are passed to the method.
+
+
+- - -
+
+#### `tf.train.SummaryWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#SummaryWriter.add_run_metadata}
+
+Adds metadata information for a single session.run() call.
+
+##### Args:
+
+
+* `run_metadata`: A `RunMetadata` protobuf object.
+* `tag`: The tag name for this metadata.
+* `global_step`: Number. Optional global step counter to record with the
+    StepStats.
+
+##### Raises:
+
+
+* `ValueError`: If the provided tag was already used for this type of event.
+
+
+
+- - -
+
+#### `tf.train.SummaryWriter.flush()` {#SummaryWriter.flush}
+
+Flushes the event file to disk.
+
+Call this method to make sure that all pending events have been written to
+disk.
+
+
+- - -
+
+#### `tf.train.SummaryWriter.close()` {#SummaryWriter.close}
+
+Flushes the event file to disk and closes the file.
+
+Call this method when you do not need the summary writer anymore.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.export_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.export_meta_graph.md
new file mode 100644
index 0000000000..c09e6783c6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.export_meta_graph.md
@@ -0,0 +1,24 @@
+### `tf.train.export_meta_graph(filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False)` {#export_meta_graph}
+
+Returns `MetaGraphDef` proto. Optionally writes it to filename.
+
+This function exports the graph, saver, and collection objects into
+`MetaGraphDef` protocol buffer with the intention of it being imported
+at a later time or location to restart training, run inference, or be
+a subgraph.
+
+##### Args:
+
+
+* `filename`: Optional filename including the path for writing the
+    generated `MetaGraphDef` protocol buffer.
+* `meta_info_def`: `MetaInfoDef` protocol buffer.
+* `graph_def`: `GraphDef` protocol buffer. +* `saver_def`: `SaverDef` protocol buffer. +* `collection_list`: List of string keys to collect. +* `as_text`: If `True`, writes the `MetaGraphDef` as an ASCII proto. + +##### Returns: + + A `MetaGraphDef` proto. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.range_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.range_input_producer.md deleted file mode 100644 index fa73440d88..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.range_input_producer.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#range_input_producer} - -Produces the integers from 0 to limit-1 in a queue. - -##### Args: - - -* `limit`: An int32 scalar tensor. -* `num_epochs`: An integer (optional). If specified, `range_input_producer` - produces each integer `num_epochs` times before generating an - OutOfRange error. If not specified, `range_input_producer` can cycle - through the integers an unlimited number of times. -* `shuffle`: Boolean. If true, the integers are randomly shuffled within each - epoch. -* `seed`: An integer (optional). Seed used if shuffle == True. -* `capacity`: An integer. Sets the queue capacity. -* `shared_name`: (optional). If set, this queue will be shared under the given - name across multiple sessions. -* `name`: A name for the operations (optional). - -##### Returns: - - A Queue with the output integers. A `QueueRunner` for the Queue - is added to the current `Graph`'s `QUEUE_RUNNER` collection. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md deleted file mode 100644 index bf2591801b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md +++ /dev/null @@ -1,74 +0,0 @@ -### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#shuffle_batch} - -Creates batches by randomly shuffling tensors. - -This function adds the following to the current `Graph`: - -* A shuffling queue into which tensors from `tensors` are enqueued. -* A `dequeue_many` operation to create batches from the queue. -* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors - from `tensors`. - -If `enqueue_many` is `False`, `tensors` is assumed to represent a -single example. An input tensor with shape `[x, y, z]` will be output -as a tensor with shape `[batch_size, x, y, z]`. - -If `enqueue_many` is `True`, `tensors` is assumed to represent a -batch of examples, where the first dimension is indexed by example, -and all members of `tensors` should have the same size in the -first dimension. If an input tensor has shape `[*, x, y, z]`, the -output will have shape `[batch_size, x, y, z]`. - -The `capacity` argument controls the how long the prefetching is allowed to -grow the queues. - -The returned operation is a dequeue operation and will throw -`tf.errors.OutOfRangeError` if the input queue is exhausted. If this -operation is feeding another input queue, its queue runner will catch -this exception, however, if this operation is used in your main thread -you are responsible for catching this yourself. - -For example: - -```python -# Creates batches of 32 images and 32 labels. 
-image_batch, label_batch = tf.train.shuffle_batch( - [single_image, single_label], - batch_size=32, - num_threads=4, - capacity=50000, - min_after_dequeue=10000) -``` - -*N.B.:* You must ensure that either (i) the `shapes` argument is -passed, or (ii) all of the tensors in `tensors` must have -fully-defined shapes. `ValueError` will be raised if neither of -these conditions holds. - -##### Args: - - -* `tensors`: The list or dictionary of tensors to enqueue. -* `batch_size`: The new batch size pulled from the queue. -* `capacity`: An integer. The maximum number of elements in the queue. -* `min_after_dequeue`: Minimum number elements in the queue after a - dequeue, used to ensure a level of mixing of elements. -* `num_threads`: The number of threads enqueuing `tensor_list`. -* `seed`: Seed for the random shuffling within the queue. -* `enqueue_many`: Whether each tensor in `tensor_list` is a single example. -* `shapes`: (Optional) The shapes for each example. Defaults to the - inferred shapes for `tensor_list`. -* `shared_name`: (Optional) If set, this queue will be shared under the given - name across multiple sessions. -* `name`: (Optional) A name for the operations. - -##### Returns: - - A list or dictionary of tensors with the types as `tensors`. - -##### Raises: - - -* `ValueError`: If the `shapes` are not specified, and cannot be - inferred from the elements of `tensors`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch_join.md deleted file mode 100644 index ab9e1be4a3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch_join.md +++ /dev/null @@ -1,68 +0,0 @@ -### `tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#shuffle_batch_join} - -Create batches by randomly shuffling tensors. - -The `tensors_list` argument is a list of tuples of tensors, or a list of -dictionaries of tensors. Each element in the list is treated similarily -to the `tensors` argument of `tf.train.shuffle_batch()`. - -This version enqueues a different list of tensors in different threads. -It adds the following to the current `Graph`: - -* A shuffling queue into which tensors from `tensors_list` are enqueued. -* A `dequeue_many` operation to create batches from the queue. -* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors - from `tensors_list`. - -`len(tensors_list)` threads will be started, with thread `i` enqueuing -the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match -`tensors_list[i2][j]` in type and shape, except in the first dimension if -`enqueue_many` is true. - -If `enqueue_many` is `False`, each `tensors_list[i]` is assumed -to represent a single example. An input tensor with shape `[x, y, z]` -will be output as a tensor with shape `[batch_size, x, y, z]`. - -If `enqueue_many` is `True`, `tensors_list[i]` is assumed to -represent a batch of examples, where the first dimension is indexed -by example, and all members of `tensors_list[i]` should have the -same size in the first dimension. If an input tensor has shape `[*, x, -y, z]`, the output will have shape `[batch_size, x, y, z]`. 
- -The `capacity` argument controls the how long the prefetching is allowed to -grow the queues. - -The returned operation is a dequeue operation and will throw -`tf.errors.OutOfRangeError` if the input queue is exhausted. If this -operation is feeding another input queue, its queue runner will catch -this exception, however, if this operation is used in your main thread -you are responsible for catching this yourself. - -##### Args: - - -* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. -* `batch_size`: An integer. The new batch size pulled from the queue. -* `capacity`: An integer. The maximum number of elements in the queue. -* `min_after_dequeue`: Minimum number elements in the queue after a - dequeue, used to ensure a level of mixing of elements. -* `seed`: Seed for the random shuffling within the queue. -* `enqueue_many`: Whether each tensor in `tensor_list_list` is a single - example. -* `shapes`: (Optional) The shapes for each example. Defaults to the - inferred shapes for `tensors_list[i]`. -* `shared_name`: (optional). If set, this queue will be shared under the given - name across multiple sessions. -* `name`: (Optional) A name for the operations. - -##### Returns: - - A list or dictionary of tensors with the same number and types as - `tensors_list[i]`. - -##### Raises: - - -* `ValueError`: If the `shapes` are not specified, and cannot be - inferred from the elements of `tensors_list`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.slice_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.slice_input_producer.md new file mode 100644 index 0000000000..da888d0fc2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.slice_input_producer.md @@ -0,0 +1,35 @@ +### `tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#slice_input_producer} + +Produces a slice of each `Tensor` in `tensor_list`. + +Implemented using a Queue -- a `QueueRunner` for the Queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +##### Args: + + +* `tensor_list`: A list of `Tensor` objects. Every `Tensor` in + `tensor_list` must have the same size in the first dimension. +* `num_epochs`: An integer (optional). If specified, `slice_input_producer` + produces each slice `num_epochs` times before generating + an `OutOfRange` error. If not specified, `slice_input_producer` can cycle + through the slices an unlimited number of times. +* `shuffle`: Boolean. If true, the integers are randomly shuffled within each + epoch. +* `seed`: An integer (optional). Seed used if shuffle == True. +* `capacity`: An integer. Sets the queue capacity. +* `shared_name`: (optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: A name for the operations (optional). + +##### Returns: + + A list of tensors, one for each element of `tensor_list`. If the tensor + in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output + tensor will have shape `[a, b, ..., z]`. + +##### Raises: + + +* `ValueError`: if `slice_input_producer` produces nothing from `tensor_list`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.write_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.write_graph.md deleted file mode 100644 index eea9025321..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.write_graph.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.train.write_graph(graph_def, logdir, name, as_text=True)` {#write_graph} - -Writes a graph proto to a file. - -The graph is written as a binary proto unless `as_text` is `True`. - -```python -v = tf.Variable(0, name='my_variable') -sess = tf.Session() -tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt') -``` - -##### Args: - - -* `graph_def`: A `GraphDef` protocol buffer. -* `logdir`: Directory where to write the graph. This can refer to remote - filesystems, such as Google Cloud Storage (GCS). -* `name`: Filename for the graph. -* `as_text`: If `True`, writes the graph as an ASCII proto. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.truediv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.truediv.md new file mode 100644 index 0000000000..0ccb1b2217 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.truediv.md @@ -0,0 +1,31 @@ +### `tf.truediv(x, y, name=None)` {#truediv} + +Divides x / y elementwise, always producing floating point results. + +The same as `tf.div` for floating point arguments, but casts integer arguments +to floating point before dividing so that the result is always floating point. +This op is generated by normal `x / y` division in Python 3 and in Python 2.7 +with `from __future__ import division`. If you want integer division that +rounds down, use `x // y` or `tf.floordiv`. + +`x` and `y` must have the same numeric type. If the inputs are floating +point, the output will have the same type. 
If the inputs are integral, the +inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` +and `int64` (matching the behavior of Numpy). + +##### Args: + + +* `x`: `Tensor` numerator of numeric type. +* `y`: `Tensor` denominator of numeric type. +* `name`: A name for the operation (optional). + +##### Returns: + + `x / y` evaluated in floating point. + +##### Raises: + + +* `TypeError`: If `x` and `y` have different dtypes. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variable_op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variable_op_scope.md new file mode 100644 index 0000000000..e3ab6e5d2e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variable_op_scope.md @@ -0,0 +1,56 @@ +### `tf.variable_op_scope(values, name_or_scope, default_name=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, reuse=None)` {#variable_op_scope} + +Returns a context manager for defining an op that creates variables. + +This context manager validates that the given `values` are from the +same graph, ensures that graph is the default graph, and pushes a +name scope and a variable scope. + +If `name_or_scope` is not None, it is used as is in the variable scope. If +`scope` is None, then `default_name` is used. In that case, if the same name +has been previously used in the same scope, it will made unique be appending +`_N` to it. + +This is intended to be used when defining generic ops and so reuse is always +inherited. + +For example, to define a new Python op called `my_op_with_vars`: + +```python +def my_op_with_vars(a, b, scope=None): + with tf.variable_op_scope([a, b], scope, "MyOp") as scope: + a = tf.convert_to_tensor(a, name="a") + b = tf.convert_to_tensor(b, name="b") + c = tf.get_variable('c') + # Define some computation that uses `a`, `b`, and `c`. 
+ return foo_op(..., name=scope) +``` + +##### Args: + + +* `values`: The list of `Tensor` arguments that are passed to the op function. +* `name_or_scope`: The name argument that is passed to the op function, + this name_or_scope is not uniquified in the variable scope. +* `default_name`: The default name to use if the `name_or_scope` argument is + `None`, this name will be uniquified. If name_or_scope is provided it + won't be used and therefore it is not required and can be None. +* `initializer`: The default initializer to pass to variable scope. +* `regularizer`: The default regularizer for variables within this scope. +* `caching_device`: The default caching device for variables within this scope. +* `partitioner`: The default partitioner for variables within this scope. +* `reuse`: `True` or `None`; if `True`, we go into reuse mode for this scope as + well as all sub-scopes; if `None`, we just inherit the parent scope reuse. + + +##### Returns: + + A context manager for use in defining a Python op. + +##### Raises: + + +* `ValueError`: when trying to reuse within a create scope, or create within + a reuse scope, or if reuse is not `None` or `True`. +* `TypeError`: when the types of some arguments are not appropriate. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md new file mode 100644 index 0000000000..762a117664 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.Graph.md @@ -0,0 +1,783 @@ +A TensorFlow computation, represented as a dataflow graph. + +A `Graph` contains a set of +[`Operation`](../../api_docs/python/framework.md#Operation) objects, +which represent units of computation; and +[`Tensor`](../../api_docs/python/framework.md#Tensor) objects, which represent +the units of data that flow between operations. 
+ +A default `Graph` is always registered, and accessible by calling +[`tf.get_default_graph()`](../../api_docs/python/framework.md#get_default_graph). +To add an operation to the default graph, simply call one of the functions +that defines a new `Operation`: + +``` +c = tf.constant(4.0) +assert c.graph is tf.get_default_graph() +``` + +Another typical usage involves the +[`Graph.as_default()`](../../api_docs/python/framework.md#Graph.as_default) +context manager, which overrides the current default graph for the +lifetime of the context: + +```python +g = tf.Graph() +with g.as_default(): + # Define operations and tensors in `g`. + c = tf.constant(30.0) + assert c.graph is g +``` + +Important note: This class *is not* thread-safe for graph construction. All +operations should be created from a single thread, or external +synchronization must be provided. Unless otherwise specified, all methods +are not thread-safe. + +- - - + +#### `tf.Graph.__init__()` {#Graph.__init__} + +Creates a new, empty Graph. + + +- - - + +#### `tf.Graph.as_default()` {#Graph.as_default} + +Returns a context manager that makes this `Graph` the default graph. + +This method should be used if you want to create multiple graphs +in the same process. For convenience, a global default graph is +provided, and all ops will be added to this graph if you do not +create a new graph explicitly. Use this method with the `with` keyword +to specify that ops created within the scope of a block should be +added to this graph. + +The default graph is a property of the current thread. If you +create a new thread, and wish to use the default graph in that +thread, you must explicitly add a `with g.as_default():` in that +thread's function. + +The following code examples are equivalent: + +```python +# 1. Using Graph.as_default(): +g = tf.Graph() +with g.as_default(): + c = tf.constant(5.0) + assert c.graph is g + +# 2. 
Constructing and making default: +with tf.Graph().as_default() as g: + c = tf.constant(5.0) + assert c.graph is g +``` + +##### Returns: + + A context manager for using this graph as the default graph. + + +- - - + +#### `tf.Graph.as_graph_def(from_version=None, add_shapes=False)` {#Graph.as_graph_def} + +Returns a serialized `GraphDef` representation of this graph. + +The serialized `GraphDef` can be imported into another `Graph` +(using [`import_graph_def()`](#import_graph_def)) or used with the +[C++ Session API](../../api_docs/cc/index.md). + +This method is thread-safe. + +##### Args: + + +* `from_version`: Optional. If this is set, returns a `GraphDef` + containing only the nodes that were added to this graph since + its `version` property had the given value. +* `add_shapes`: If true, adds an "_output_shapes" list attr to each + node with the inferred shapes of each of its outputs. + +##### Returns: + + A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) + protocol buffer. + +##### Raises: + + +* `ValueError`: If the `graph_def` would be too large. + + +- - - + +#### `tf.Graph.finalize()` {#Graph.finalize} + +Finalizes this graph, making it read-only. + +After calling `g.finalize()`, no new operations can be added to +`g`. This method is used to ensure that no operations are added +to a graph when it is shared between multiple threads, for example +when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner). + + +- - - + +#### `tf.Graph.finalized` {#Graph.finalized} + +True if this graph has been finalized. + + + +- - - + +#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies} + +Returns a context manager that specifies control dependencies. + +Use with the `with` keyword to specify that all operations constructed +within the context should have control dependencies on +`control_inputs`. 
For example: + +```python +with g.control_dependencies([a, b, c]): + # `d` and `e` will only run after `a`, `b`, and `c` have executed. + d = ... + e = ... +``` + +Multiple calls to `control_dependencies()` can be nested, and in +that case a new `Operation` will have control dependencies on the union +of `control_inputs` from all active contexts. + +```python +with g.control_dependencies([a, b]): + # Ops constructed here run after `a` and `b`. + with g.control_dependencies([c, d]): + # Ops constructed here run after `a`, `b`, `c`, and `d`. +``` + +You can pass None to clear the control dependencies: + +```python +with g.control_dependencies([a, b]): + # Ops constructed here run after `a` and `b`. + with g.control_dependencies(None): + # Ops constructed here run normally, not waiting for either `a` or `b`. + with g.control_dependencies([c, d]): + # Ops constructed here run after `c` and `d`, also not waiting + # for either `a` or `b`. +``` + +*N.B.* The control dependencies context applies *only* to ops that +are constructed within the context. Merely using an op or tensor +in the context does not add a control dependency. The following +example illustrates this point: + +```python +# WRONG +def my_func(pred, tensor): + t = tf.matmul(tensor, tensor) + with tf.control_dependencies([pred]): + # The matmul op is created outside the context, so no control + # dependency will be added. + return t + +# RIGHT +def my_func(pred, tensor): + with tf.control_dependencies([pred]): + # The matmul op is created in the context, so a control dependency + # will be added. + return tf.matmul(tensor, tensor) +``` + +##### Args: + + +* `control_inputs`: A list of `Operation` or `Tensor` objects which + must be executed or computed before running the operations + defined in the context. Can also be `None` to clear the control + dependencies. + +##### Returns: + + A context manager that specifies control dependencies for all + operations constructed within the context. 
+ +##### Raises: + + +* `TypeError`: If `control_inputs` is not a list of `Operation` or + `Tensor` objects. + + +- - - + +#### `tf.Graph.device(device_name_or_function)` {#Graph.device} + +Returns a context manager that specifies the default device to use. + +The `device_name_or_function` argument may either be a device name +string, a device function, or None: + +* If it is a device name string, all operations constructed in + this context will be assigned to the device with that name, unless + overridden by a nested `device()` context. +* If it is a function, it will be treated as a function from + Operation objects to device name strings, and invoked each time + a new Operation is created. The Operation will be assigned to + the device with the returned name. +* If it is None, all `device()` invocations from the enclosing context + will be ignored. + +For information about the valid syntax of device name strings, see +the documentation in +[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h). + +For example: + +```python +with g.device('/gpu:0'): + # All operations constructed in this context will be placed + # on GPU 0. + with g.device(None): + # All operations constructed in this context will have no + # assigned device. + +# Defines a function from `Operation` to device string. +def matmul_on_gpu(n): + if n.type == "MatMul": + return "/gpu:0" + else: + return "/cpu:0" + +with g.device(matmul_on_gpu): + # All operations of type "MatMul" constructed in this context + # will be placed on GPU 0; all other operations will be placed + # on CPU 0. +``` + +**N.B.** The device scope may be overridden by op wrappers or +other library code. For example, a variable assignment op +`v.assign()` must be colocated with the `tf.Variable` `v`, and +incompatible device scopes will be ignored. + +##### Args: + + +* `device_name_or_function`: The device name or function to use in + the context. 
+ +##### Returns: + + A context manager that specifies the default device to use for newly + created ops. + + +- - - + +#### `tf.Graph.name_scope(name)` {#Graph.name_scope} + +Returns a context manager that creates hierarchical names for operations. + +A graph maintains a stack of name scopes. A `with name_scope(...):` +statement pushes a new name onto the stack for the lifetime of the context. + +The `name` argument will be interpreted as follows: + +* A string (not ending with '/') will create a new name scope, in which + `name` is appended to the prefix of all operations created in the + context. If `name` has been used before, it will be made unique by + calling `self.unique_name(name)`. +* A scope previously captured from a `with g.name_scope(...) as + scope:` statement will be treated as an "absolute" name scope, which + makes it possible to re-enter existing scopes. +* A value of `None` or the empty string will reset the current name scope + to the top-level (empty) name scope. + +For example: + +```python +with tf.Graph().as_default() as g: + c = tf.constant(5.0, name="c") + assert c.op.name == "c" + c_1 = tf.constant(6.0, name="c") + assert c_1.op.name == "c_1" + + # Creates a scope called "nested" + with g.name_scope("nested") as scope: + nested_c = tf.constant(10.0, name="c") + assert nested_c.op.name == "nested/c" + + # Creates a nested scope called "inner". + with g.name_scope("inner"): + nested_inner_c = tf.constant(20.0, name="c") + assert nested_inner_c.op.name == "nested/inner/c" + + # Create a nested scope called "inner_1". + with g.name_scope("inner"): + nested_inner_1_c = tf.constant(30.0, name="c") + assert nested_inner_1_c.op.name == "nested/inner_1/c" + + # Treats `scope` as an absolute name scope, and + # switches to the "nested/" scope. 
+ with g.name_scope(scope): + nested_d = tf.constant(40.0, name="d") + assert nested_d.op.name == "nested/d" + + with g.name_scope(""): + e = tf.constant(50.0, name="e") + assert e.op.name == "e" +``` + +The name of the scope itself can be captured by `with +g.name_scope(...) as scope:`, which stores the name of the scope +in the variable `scope`. This value can be used to name an +operation that represents the overall result of executing the ops +in a scope. For example: + +```python +inputs = tf.constant(...) +with g.name_scope('my_layer') as scope: + weights = tf.Variable(..., name="weights") + biases = tf.Variable(..., name="biases") + affine = tf.matmul(inputs, weights) + biases + output = tf.nn.relu(affine, name=scope) +``` + +##### Args: + + +* `name`: A name for the scope. + +##### Returns: + + A context manager that installs `name` as a new name scope. + + + +A `Graph` instance supports an arbitrary number of "collections" +that are identified by name. For convenience when building a large +graph, collections can store groups of related objects: for +example, the `tf.Variable` uses a collection (named +[`tf.GraphKeys.VARIABLES`](../../api_docs/python/framework.md#GraphKeys)) for +all variables that are created during the construction of a graph. The caller +may define additional collections by specifying a new name. + +- - - + +#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection} + +Stores `value` in the collection with the given `name`. + +Note that collections are not sets, so it is possible to add a value to +a collection several times. + +##### Args: + + +* `name`: The key for the collection. The `GraphKeys` class + contains many standard names for collections. +* `value`: The value to add to the collection. + + +- - - + +#### `tf.Graph.add_to_collections(names, value)` {#Graph.add_to_collections} + +Stores `value` in the collections given by `names`. 
+
+Note that collections are not sets, so it is possible to add a value to
+a collection several times. This function makes sure that duplicates in
+`names` are ignored, but it will not check for pre-existing membership of
+`value` in any of the collections in `names`.
+
+`names` can be any iterable, but if `names` is a string, it is treated as a
+single collection name.
+
+##### Args:
+
+
+* `names`: The keys for the collections to add to. The `GraphKeys` class
+    contains many standard names for collections.
+* `value`: The value to add to the collections.
+
+
+- - -
+
+#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection}
+
+Returns a list of values in the collection with the given `name`.
+
+Unlike `get_collection_ref()`, which returns the actual collection list if
+it exists, this method returns a new list each time it is called.
+
+##### Args:
+
+
+* `name`: The key for the collection. For example, the `GraphKeys` class
+    contains many standard names for collections.
+* `scope`: (Optional.) If supplied, the resulting list is filtered to include
+    only items whose `name` attribute matches using `re.match`. Items
+    without a `name` attribute are never returned if a scope is supplied and
+    the choice of `re.match` means that a `scope` without special tokens
+    filters by prefix.
+
+##### Returns:
+
+  The list of values in the collection with the given `name`, or
+  an empty list if no value has been added to that collection. The
+  list contains the values in the order under which they were
+  collected.
+
+
+- - -
+
+#### `tf.Graph.get_collection_ref(name)` {#Graph.get_collection_ref}
+
+Returns a list of values in the collection with the given `name`.
+
+If the collection exists, this returns the list itself, which can
+be modified in place to change the collection. If the collection does
+not exist, it is created as an empty list and the list is returned.
+
+This is different from `get_collection()`, which always returns a copy of
+the collection list if it exists and never creates an empty collection.
+
+##### Args:
+
+
+* `name`: The key for the collection. For example, the `GraphKeys` class
+    contains many standard names for collections.
+
+##### Returns:
+
+  The list of values in the collection with the given `name`, or an empty
+  list if no value has been added to that collection.
+
+
+
+- - -
+
+#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element}
+
+Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
+
+This function validates that `obj` represents an element of this
+graph, and gives an informative error message if it is not.
+
+This function is the canonical way to get/validate an object of
+one of the allowed types from an external argument reference in the
+Session API.
+
+This method may be called concurrently from multiple threads.
+
+##### Args:
+
+
+* `obj`: A `Tensor`, an `Operation`, or the name of a tensor or operation.
+    Can also be any object with an `_as_graph_element()` method that returns
+    a value of one of these types.
+* `allow_tensor`: If true, `obj` may refer to a `Tensor`.
+* `allow_operation`: If true, `obj` may refer to an `Operation`.
+
+##### Returns:
+
+  The `Tensor` or `Operation` in the Graph corresponding to `obj`.
+
+##### Raises:
+
+
+* `TypeError`: If `obj` is not one of the types that can be converted to
+    a graph element.
+* `ValueError`: If `obj` is of an appropriate type but invalid. For
+    example, an invalid string.
+* `KeyError`: If `obj` is not an object in the graph.
+
+
+- - -
+
+#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name}
+
+Returns the `Operation` with the given `name`.
+
+This method may be called concurrently from multiple threads.
+
+##### Args:
+
+
+* `name`: The name of the `Operation` to return.
+
+##### Returns:
+
+  The `Operation` with the given `name`.
+ +##### Raises: + + +* `TypeError`: If `name` is not a string. +* `KeyError`: If `name` does not correspond to an operation in this graph. + + +- - - + +#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name} + +Returns the `Tensor` with the given `name`. + +This method may be called concurrently from multiple threads. + +##### Args: + + +* `name`: The name of the `Tensor` to return. + +##### Returns: + + The `Tensor` with the given `name`. + +##### Raises: + + +* `TypeError`: If `name` is not a string. +* `KeyError`: If `name` does not correspond to a tensor in this graph. + + +- - - + +#### `tf.Graph.get_operations()` {#Graph.get_operations} + +Return the list of operations in the graph. + +You can modify the operations in place, but modifications +to the list such as inserts/delete have no effect on the +list of operations known to the graph. + +This method may be called concurrently from multiple threads. + +##### Returns: + + A list of Operations. + + + +- - - + +#### `tf.Graph.seed` {#Graph.seed} + +The graph-level random seed of this graph. + + +- - - + +#### `tf.Graph.unique_name(name, mark_as_used=True)` {#Graph.unique_name} + +Return a unique operation name for `name`. + +Note: You rarely need to call `unique_name()` directly. Most of +the time you just need to create `with g.name_scope()` blocks to +generate structured names. + +`unique_name` is used to generate structured names, separated by +`"/"`, to help identify operations when debugging a graph. +Operation names are displayed in error messages reported by the +TensorFlow runtime, and in various visualization tools such as +TensorBoard. + +If `mark_as_used` is set to `True`, which is the default, a new +unique name is created and marked as in use. If it's set to `False`, +the unique name is returned without actually being marked as used. +This is useful when the caller simply wants to know what the name +to be created will be. + +##### Args: + + +* `name`: The name for an operation. 
+* `mark_as_used`: Whether to mark this name as being used. + +##### Returns: + + A string to be passed to `create_op()` that will be used + to name the operation being created. + + +- - - + +#### `tf.Graph.version` {#Graph.version} + +Returns a version number that increases as ops are added to the graph. + +Note that this is unrelated to the +[GraphDef version](#Graph.graph_def_version). + + +- - - + +#### `tf.Graph.graph_def_versions` {#Graph.graph_def_versions} + +The GraphDef version information of this graph. + +For details on the meaning of each version, see [`GraphDef`] +(https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto). + +##### Returns: + + A `VersionDef`. + + + +- - - + +#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)` {#Graph.create_op} + +Creates an `Operation` in this graph. + +This is a low-level interface for creating an `Operation`. Most +programs will not call this method directly, and instead use the +Python op constructors, such as `tf.constant()`, which add ops to +the default graph. + +##### Args: + + +* `op_type`: The `Operation` type to create. This corresponds to the + `OpDef.name` field for the proto that defines the operation. +* `inputs`: A list of `Tensor` objects that will be inputs to the `Operation`. +* `dtypes`: A list of `DType` objects that will be the types of the tensors + that the operation produces. +* `input_types`: (Optional.) A list of `DType`s that will be the types of + the tensors that the operation consumes. By default, uses the base + `DType` of each input in `inputs`. Operations that expect + reference-typed inputs must specify `input_types` explicitly. +* `name`: (Optional.) A string name for the operation. If not specified, a + name is generated based on `op_type`. +* `attrs`: (Optional.) 
A dictionary where the key is the attribute name (a
+    string) and the value is the respective `attr` attribute of the
+    `NodeDef` proto that will represent the operation (an `AttrValue`
+    proto).
+* `op_def`: (Optional.) The `OpDef` proto that describes the `op_type` that
+    the operation will have.
+* `compute_shapes`: (Optional.) If True, shape inference will be performed
+    to compute the shapes of the outputs.
+* `compute_device`: (Optional.) If True, device functions will be executed
+    to compute the device property of the Operation.
+
+##### Raises:
+
+
+* `TypeError`: if any of the inputs is not a `Tensor`.
+* `ValueError`: if colocation conflicts with existing device assignment.
+
+##### Returns:
+
+  An `Operation` object.
+
+
+- - -
+
+#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map}
+
+EXPERIMENTAL: A context manager for overriding gradient functions.
+
+This context manager can be used to override the gradient function
+that will be used for ops within the scope of the context.
+
+For example:
+
+```python
+@tf.RegisterGradient("CustomSquare")
+def _custom_square_grad(op, grad):
+  # ...
+
+with tf.Graph().as_default() as g:
+  c = tf.constant(5.0)
+  s_1 = tf.square(c)  # Uses the default gradient for tf.square.
+  with g.gradient_override_map({"Square": "CustomSquare"}):
+    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
+                        # gradient of s_2.
+```
+
+##### Args:
+
+
+* `op_type_map`: A dictionary mapping op type strings to alternative op
+    type strings.
+
+##### Returns:
+
+  A context manager that sets the alternative op type to be used for one
+  or more ops created in that context.
+
+##### Raises:
+
+
+* `TypeError`: If `op_type_map` is not a dictionary mapping strings to
+    strings.
+
+
+
+#### Other Methods
+- - -
+
+#### `tf.Graph.colocate_with(op, ignore_existing=False)` {#Graph.colocate_with}
+
+Returns a context manager that specifies an op to colocate with.
+ +Note: this function is not for public use, only for internal libraries. + +For example: + +```python +a = tf.Variable([1.0]) +with g.colocate_with(a): + b = tf.constant(1.0) + c = tf.add(a, b) +``` + +`b` and `c` will always be colocated with `a`, no matter where `a` +is eventually placed. + +##### Args: + + +* `op`: The op to colocate all created ops with. +* `ignore_existing`: If true, only applies colocation of this op within + the context, rather than applying all colocation properties + on the stack. + +##### Raises: + + +* `ValueError`: if op is None. + +##### Yields: + + A context manager that specifies the op with which to colocate + newly created ops. + + +- - - + +#### `tf.Graph.get_all_collection_keys()` {#Graph.get_all_collection_keys} + +Returns a list of collections used in this graph. + + +- - - + +#### `tf.Graph.is_feedable(tensor)` {#Graph.is_feedable} + +Returns `True` if and only if `tensor` is feedable. + + +- - - + +#### `tf.Graph.prevent_feeding(tensor)` {#Graph.prevent_feeding} + +Marks the given `tensor` as unfeedable in this graph. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.IndexedSlices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.IndexedSlices.md new file mode 100644 index 0000000000..435a178205 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.IndexedSlices.md @@ -0,0 +1,93 @@ +A sparse representation of a set of tensor slices at given indices. + +This class is a simple wrapper for a pair of `Tensor` objects: + +* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`. +* `indices`: A 1-D integer `Tensor` with shape `[D0]`. + +An `IndexedSlices` is typically used to represent a subset of a larger +tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`. +The values in `indices` are the indices in the first dimension of +the slices that have been extracted from the larger tensor. 
+ +The dense tensor `dense` represented by an `IndexedSlices` `slices` has + +```python +dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...] +``` + +The `IndexedSlices` class is used principally in the definition of +gradients for operations that have sparse gradients +(e.g. [`tf.gather`](../../api_docs/python/array_ops.md#gather)). + +Contrast this representation with +[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), +which uses multi-dimensional indices and scalar values. + +- - - + +#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__} + +Creates an `IndexedSlices`. + + + +- - - + +#### `tf.IndexedSlices.values` {#IndexedSlices.values} + +A `Tensor` containing the values of the slices. + + +- - - + +#### `tf.IndexedSlices.indices` {#IndexedSlices.indices} + +A 1-D `Tensor` containing the indices of the slices. + + +- - - + +#### `tf.IndexedSlices.dense_shape` {#IndexedSlices.dense_shape} + +A 1-D `Tensor` containing the shape of the corresponding dense tensor. + + + +- - - + +#### `tf.IndexedSlices.name` {#IndexedSlices.name} + +The name of this `IndexedSlices`. + + +- - - + +#### `tf.IndexedSlices.dtype` {#IndexedSlices.dtype} + +The `DType` of elements in this tensor. + + +- - - + +#### `tf.IndexedSlices.device` {#IndexedSlices.device} + +The name of the device on which `values` will be produced, or `None`. + + +- - - + +#### `tf.IndexedSlices.op` {#IndexedSlices.op} + +The `Operation` that produces `values` as an output. + + + +#### Other Methods +- - - + +#### `tf.IndexedSlices.graph` {#IndexedSlices.graph} + +The `Graph` that contains the values, indices, and shape tensors. 
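The row-scatter relationship described above can be sketched in plain NumPy. The `values` and `indices` arrays here are hypothetical stand-ins for the `slices.values` and `slices.indices` tensors, not the TensorFlow class itself:

```python
import numpy as np

# Hypothetical data playing the role of slices.values / slices.indices.
values = np.array([[1.0, 2.0],
                   [3.0, 4.0]])   # shape [D0, D1] = [2, 2]
indices = np.array([0, 3])        # rows taken from the larger dense tensor
dense_shape = (5, 2)              # LARGE0 = 5 >> D0 = 2

# Reconstruct the dense tensor: dense[indices[i], :] = values[i, :],
# with every unreferenced row left at zero.
dense = np.zeros(dense_shape)
dense[indices] = values
```

This also shows why the representation pays off only when `D0` is much smaller than `LARGE0`: the pair stores two small arrays instead of the mostly-zero dense tensor.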
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensorValue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensorValue.md new file mode 100644 index 0000000000..efa3314f23 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensorValue.md @@ -0,0 +1,22 @@ +SparseTensorValue(indices, values, shape) +- - - + +#### `tf.SparseTensorValue.indices` {#SparseTensorValue.indices} + +Alias for field number 0 + + +- - - + +#### `tf.SparseTensorValue.shape` {#SparseTensorValue.shape} + +Alias for field number 2 + + +- - - + +#### `tf.SparseTensorValue.values` {#SparseTensorValue.values} + +Alias for field number 1 + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md new file mode 100644 index 0000000000..e168cabc9e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.WholeFileReader.md @@ -0,0 +1,148 @@ +A Reader that outputs the entire contents of a file as a value. + +To use, enqueue filenames in a Queue. The output of Read will +be a filename (key) and the contents of that file (value). + +See ReaderBase for supported methods. +- - - + +#### `tf.WholeFileReader.__init__(name=None)` {#WholeFileReader.__init__} + +Create a WholeFileReader. + +##### Args: + + +* `name`: A name for the operation (optional). + + +- - - + +#### `tf.WholeFileReader.num_records_produced(name=None)` {#WholeFileReader.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. 
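The enqueue-filenames pattern described at the top of this reader's doc can be sketched in plain Python. The `read_whole_file` helper below is a hypothetical stand-in backed by an ordinary deque, not the TensorFlow reader op:

```python
import os
import tempfile
from collections import deque

def read_whole_file(queue):
    # Sketch of WholeFileReader.read: dequeue one filename (the key)
    # and return the entire contents of that file (the value).
    key = queue.popleft()
    with open(key, "rb") as f:
        return key, f.read()

# Demo with a hypothetical throwaway file.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "wb") as f:
    f.write(b"hello")

queue = deque([path])
key, value = read_whole_file(queue)
```

Here `key` is the dequeued path and `value` is the raw file contents, mirroring the (key, value) pair the real op produces.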
+ + +- - - + +#### `tf.WholeFileReader.num_work_units_completed(name=None)` {#WholeFileReader.num_work_units_completed} + +Returns the number of work units this reader has finished processing. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.WholeFileReader.read(queue, name=None)` {#WholeFileReader.read} + +Returns the next record (key, value pair) produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.WholeFileReader.reader_ref` {#WholeFileReader.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.WholeFileReader.reset(name=None)` {#WholeFileReader.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.WholeFileReader.restore_state(state, name=None)` {#WholeFileReader.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.WholeFileReader.serialize_state(name=None)` {#WholeFileReader.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. 
+ +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. + + +- - - + +#### `tf.WholeFileReader.supports_serialize` {#WholeFileReader.supports_serialize} + +Whether the Reader implementation can serialize its state. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_n.md new file mode 100644 index 0000000000..c214a46057 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.add_n.md @@ -0,0 +1,15 @@ +### `tf.add_n(inputs, name=None)` {#add_n} + +Add all input tensors element wise. + +##### Args: + + +* `inputs`: A list of at least 1 `Tensor` objects of the same type in: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Must all be the same size and shape. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `inputs`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank_at_least.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank_at_least.md new file mode 100644 index 0000000000..1b33f3401b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank_at_least.md @@ -0,0 +1,37 @@ +### `tf.assert_rank_at_least(x, rank, data=None, summarize=None, name=None)` {#assert_rank_at_least} + +Assert `x` has rank equal to `rank` or higher. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_rank_at_least(x, 2)], x) +``` + +##### Args: + + +* `x`: Numeric `Tensor`. +* `rank`: Scalar `Tensor`. 
+* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). + Defaults to "assert_rank_at_least". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` has specified rank or higher. + +##### Raises: + + +* `ValueError`: If static checks determine `x` has wrong rank. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_variables_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_variables_initialized.md new file mode 100644 index 0000000000..ef61848aa8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_variables_initialized.md @@ -0,0 +1,24 @@ +### `tf.assert_variables_initialized(var_list=None)` {#assert_variables_initialized} + +Returns an Op to check if variables are initialized. + +NOTE: This function is obsolete and will be removed in 6 months. Please +change your implementation to use `report_uninitialized_variables()`. + +When run, the returned Op will raise the exception `FailedPreconditionError` +if any of the variables has not yet been initialized. + +Note: This function is implemented by trying to fetch the values of the +variables. If one of the variables is not initialized a message may be +logged by the C++ runtime. This is expected. + +##### Args: + + +* `var_list`: List of `Variable` objects to check. Defaults to the + value of `all_variables().` + +##### Returns: + + An Op, or None if there are no variables. 
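The rank requirement that `assert_rank_at_least` (documented above) enforces can be illustrated with a small NumPy sketch; `rank_at_least` is a hypothetical helper that only mirrors the static check, not the TensorFlow op:

```python
import numpy as np

def rank_at_least(x, rank):
    # The rank of a tensor is its number of dimensions; the check
    # passes when that rank is greater than or equal to `rank`.
    return np.ndim(x) >= rank

x = np.ones((3, 4))             # a rank-2 tensor
assert rank_at_least(x, 2)      # rank 2 satisfies "at least 2"
assert not rank_at_least(x, 3)  # but not "at least 3"
```

In graph mode the real op additionally covers ranks only known at run time, which is why it is used inside `tf.control_dependencies` rather than as a plain Python assertion.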
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md new file mode 100644 index 0000000000..487680f50b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_cholesky.md @@ -0,0 +1,20 @@ +### `tf.batch_cholesky(input, name=None)` {#batch_cholesky} + +Calculates the Cholesky decomposition of a batch of square matrices. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices, with the same constraints as the single matrix Cholesky +decomposition above. The output is a tensor of the same shape as the input +containing the Cholesky decompositions for all input submatrices `[..., :, :]`. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. + Shape is `[..., M, M]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft2d.md deleted file mode 100644 index e7a2c7b943..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft2d.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_fft2d(input, name=None)` {#batch_fft2d} - -Compute the 2-dimensional discrete Fourier Transform over the inner-most - -2 dimensions of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. The inner-most 2 - dimensions of `input` are replaced with their 2D Fourier Transform. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft3d.md new file mode 100644 index 0000000000..10c2ea3bf6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_fft3d.md @@ -0,0 +1,18 @@ +### `tf.batch_fft3d(input, name=None)` {#batch_fft3d} + +Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 + +dimensions of `input`. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + A complex64 tensor of the same shape as `input`. The inner-most 3 + dimensions of `input` are replaced with their 3D Fourier Transform. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft.md deleted file mode 100644 index c4b865425b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_ifft.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_ifft(input, name=None)` {#batch_ifft} - -Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most - -dimension of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. The inner-most - dimension of `input` is replaced with its inverse 1D Fourier Transform. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_band_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_band_part.md deleted file mode 100644 index d9c208a460..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_band_part.md +++ /dev/null @@ -1,60 +0,0 @@ -### `tf.batch_matrix_band_part(input, num_lower, num_upper, name=None)` {#batch_matrix_band_part} - -Copy a tensor setting everything outside a central band in each innermost matrix - -to zero. - -The `band` part is computed as follows: -Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a -tensor with the same shape where - -`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`. - -The indicator function 'in_band(m, n)` is one if -`(num_lower < 0 || (m-n) <= num_lower)) && -(num_upper < 0 || (n-m) <= num_upper)`, and zero otherwise. - -For example: - -```prettyprint -# if 'input' is [[ 0, 1, 2, 3] - [-1, 0, 1, 2] - [-2, -1, 0, 1] - [-3, -2, -1, 0]], - -tf.batch_matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3] - [-1, 0, 1, 2] - [ 0, -1, 0, 1] - [ 0, 0, -1, 0]], - -tf.batch_matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0] - [-1, 0, 1, 0] - [-2, -1, 0, 1] - [ 0, -2, -1, 0]] -``` - -Useful special cases: - -```prettyprint - tf.batch_matrix_band_part(input, 0, -1) ==> Upper triangular part. - tf.batch_matrix_band_part(input, -1, 0) ==> Lower triangular part. - tf.batch_matrix_band_part(input, 0, 0) ==> Diagonal. -``` - -##### Args: - - -* `input`: A `Tensor`. Rank `k` tensor. -* `num_lower`: A `Tensor` of type `int64`. - 0-D tensor. Number of subdiagonals to keep. If negative, keep entire - lower triangle. -* `num_upper`: A `Tensor` of type `int64`. - 0-D tensor. Number of superdiagonals to keep. If negative, keep - entire upper triangle. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. 
-  Rank `k` tensor of the same shape as input. The extracted banded tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_triangular_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_triangular_solve.md
new file mode 100644
index 0000000000..297e19088d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.batch_matrix_triangular_solve.md
@@ -0,0 +1,39 @@
+### `tf.batch_matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#batch_matrix_triangular_solve}
+
+Solves systems of linear equations with upper or lower triangular matrices by
+
+backsubstitution.
+
+`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form
+square matrices. If `lower` is `True` then the strictly upper triangular part
+of each inner-most matrix is assumed to be zero and not accessed.
+If `lower` is `False` then the strictly lower triangular part of each inner-most
+matrix is assumed to be zero and not accessed.
+`rhs` is a tensor of shape `[..., M, K]`.
+
+The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then the
+innermost matrices in `output` satisfy matrix equations
+`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
+If `adjoint` is `True` then the innermost matrices in
+`output` satisfy matrix equations
+`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
+
+##### Args:
+
+
+* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`.
+    Shape is `[..., M, M]`.
+* `rhs`: A `Tensor`. Must have the same type as `matrix`.
+    Shape is `[..., M, K]`.
+* `lower`: An optional `bool`. Defaults to `True`.
+    Boolean indicating whether the innermost matrices in `matrix` are
+    lower or upper triangular.
+* `adjoint`: An optional `bool`. Defaults to `False`.
+    Boolean indicating whether to solve with `matrix` or its (block-wise)
+    adjoint.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.bytes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.bytes.md
deleted file mode 100644
index 5353507e39..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.bytes.md
+++ /dev/null
@@ -1,4 +0,0 @@
-str(object='') -> string
-
-Return a nice string representation of the object.
-If the argument is a string, the return value is the same object.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cast.md
new file mode 100644
index 0000000000..9571f87afe
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cast.md
@@ -0,0 +1,30 @@
+### `tf.cast(x, dtype, name=None)` {#cast}
+
+Casts a tensor to a new type.
+
+The operation casts `x` (in case of `Tensor`) or `x.values`
+(in case of `SparseTensor`) to `dtype`.
+
+For example:
+
+```python
+# tensor `a` is [1.8, 2.2], dtype=tf.float32
+tf.cast(a, tf.int32) ==> [1, 2]  # dtype=tf.int32
+```
+
+##### Args:
+
+
+* `x`: A `Tensor` or `SparseTensor`.
+* `dtype`: The destination type.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` or `SparseTensor` with the same shape as `x`.
+
+##### Raises:
+
+
+* `TypeError`: If `x` cannot be cast to the `dtype`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.complex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.complex.md
new file mode 100644
index 0000000000..55487ea170
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.complex.md
@@ -0,0 +1,30 @@
+### `tf.complex(real, imag, name=None)` {#complex}
+
+Converts two real numbers to a complex number.
+
+Given a tensor `real` representing the real part of a complex number, and a
+tensor `imag` representing the imaginary part of a complex number, this
+operation returns complex numbers elementwise of the form \(a + bj\), where
+*a* represents the `real` part and *b* represents the `imag` part.
+
+The input tensors `real` and `imag` must have the same shape.
+
+For example:
+
+```
+# tensor `real` is [2.25, 3.25]
+# tensor `imag` is [4.75, 5.75]
+tf.complex(real, imag) ==> [2.25 + 4.75j, 3.25 + 5.75j]
+```
+
+##### Args:
+
+
+* `real`: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* `imag`: A `Tensor`. Must have the same type as `real`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `complex64` or `complex128`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cond.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cond.md
deleted file mode 100644
index 6e6a9a69bf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.cond.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.cond(pred, fn1, fn2, name=None)` {#cond}
-
-Return either fn1() or fn2() based on the boolean predicate `pred`.
-
-`fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have
-the same non-zero number and type of outputs.
-
-Note that the conditional execution applies only to the operations defined in
-fn1 and fn2. Consider the following simple program:
-
-```python
-z = tf.mul(a, b)
-result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
-```
-
-If x < y, the tf.add operation will be executed and tf.square
-operation will not be executed. Since z is needed for at least one
-branch of the cond, the tf.mul operation is always executed, unconditionally.
-Although this behavior is consistent with the dataflow model of TensorFlow,
-it has occasionally surprised some users who expected a lazier semantics.
- -##### Args: - - -* `pred`: A scalar determining whether to return the result of `fn1` or `fn2`. -* `fn1`: The callable to be performed if pred is true. -* `fn2`: The callable to be performed if pref is false. -* `name`: Optional name prefix for the returned tensors. - -##### Returns: - - Tensors returned by the call to either `fn1` or `fn2`. If the callables - return a singleton list, the element is extracted from the list. - -##### Raises: - - -* `TypeError`: if `fn1` or `fn2` is not callable. -* `ValueError`: if `fn1` and `fn2` do not return the same number of tensors, or - return tensors of different types. - - -* `Example`: - -```python - x = tf.constant(2) - y = tf.constant(5) - def f1(): return tf.mul(x, 17) - def f2(): return tf.add(y, 23) - r = cond(tf.less(x, y), f1, f2) - # r is set to f1(). - # Operations in f2 (e.g., tf.add) are not executed. -``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DiscreteDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DiscreteDistribution.md new file mode 100644 index 0000000000..6e78e38ebe --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DiscreteDistribution.md @@ -0,0 +1,139 @@ +Base class for discrete probability distributions. + +`DiscreteDistribution` defines the API for the likelihood functions `pmf` and +`log_pmf` of discrete probability distributions. + +Subclasses must override both `pmf` and `log_pmf` but one can call this base +class's implementation. + +See `BaseDistribution` for more information on the API for probability +distributions. +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.batch_shape(name=None)` {#DiscreteDistribution.batch_shape} + +Batch dimensions of this instance as a 1-D int32 `Tensor`. 
+ +The product of the dimensions of the `batch_shape` is the number of +independent distributions of this kind the instance represents. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `batch_shape` + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.cdf(value, name='cdf')` {#DiscreteDistribution.cdf} + +Cumulative distribution function. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.dtype` {#DiscreteDistribution.dtype} + +dtype of samples from this distribution. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.entropy(name=None)` {#DiscreteDistribution.entropy} + +Entropy of the distribution in nats. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.event_shape(name=None)` {#DiscreteDistribution.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.get_batch_shape()` {#DiscreteDistribution.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.get_event_shape()` {#DiscreteDistribution.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.log_cdf(value, name='log_cdf')` {#DiscreteDistribution.log_cdf} + +Log CDF. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.log_pmf(value, name='log_pmf')` {#DiscreteDistribution.log_pmf} + +Log of the probability mass function. 
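Since `log_pmf` is by definition the elementwise log of `pmf`, a subclass can implement either one in terms of the other. A hedged NumPy sketch of that relationship, using a hypothetical Bernoulli pmf (illustration only, not the TensorFlow API):

```python
import numpy as np

def bernoulli_pmf(value, p=0.3):
    # Hypothetical discrete pmf: P(X=1) = p, P(X=0) = 1 - p.
    value = np.asarray(value)
    return np.where(value == 1, p, 1.0 - p)

def bernoulli_log_pmf(value, p=0.3):
    # log_pmf is defined as the elementwise log of pmf.
    return np.log(bernoulli_pmf(value, p))

x = np.array([0, 1, 1])
pmf_vals = bernoulli_pmf(x)                  # 0.7 for each 0, 0.3 for each 1
recovered = np.exp(bernoulli_log_pmf(x))     # exp undoes the log, recovering pmf
```

Working in log space is the usual choice for numerical stability when probabilities get small; `pmf` can then be recovered with `exp` as above.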
+ + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.mean` {#DiscreteDistribution.mean} + + + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.name` {#DiscreteDistribution.name} + +Name to prepend to all ops. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.pmf(value, name='pmf')` {#DiscreteDistribution.pmf} + +Probability mass function. + + +- - - + +#### `tf.contrib.distributions.DiscreteDistribution.sample(n, seed=None, name=None)` {#DiscreteDistribution.sample} + +Generate `n` samples. + +##### Args: + + +* `n`: scalar. Number of samples to draw from each distribution. +* `seed`: Python integer seed for RNG +* `name`: name to give to the op. + +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md new file mode 100644 index 0000000000..ad6008c9f6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Uniform.md @@ -0,0 +1,216 @@ +Uniform distribution with `a` and `b` parameters. + +The PDF of this distribution is constant between [`a`, `b`], and 0 elsewhere. +- - - + +#### `tf.contrib.distributions.Uniform.__init__(a=0.0, b=1.0, name='Uniform')` {#Uniform.__init__} + +Construct Uniform distributions with `a` and `b`. + +The parameters `a` and `b` must be shaped in a way that supports +broadcasting (e.g. `b - a` is a valid operation). 
+ +Here are examples without broadcasting: + +```python +# Without broadcasting +u1 = Uniform(3.0, 4.0) # a single uniform distribution [3, 4] +u2 = Uniform([1.0, 2.0], [3.0, 4.0]) # 2 distributions [1, 3], [2, 4] +u3 = Uniform([[1.0, 2.0], + [3.0, 4.0]], + [[1.5, 2.5], + [3.5, 4.5]]) # 4 distributions +``` + +And with broadcasting: + +```python +u1 = Uniform(3.0, [5.0, 6.0, 7.0]) # 3 distributions +``` + +##### Args: + + +* `a`: `float` or `double` tensor, the minimum endpoint. +* `b`: `float` or `double` tensor, the maximum endpoint. Must be > `a`. +* `name`: The name to prefix Ops created by this distribution class. + +##### Raises: + + +* `InvalidArgumentError`: if `a >= b`. + + +- - - + +#### `tf.contrib.distributions.Uniform.a` {#Uniform.a} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.b` {#Uniform.b} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.batch_shape(name='batch_shape')` {#Uniform.batch_shape} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.cdf(x, name='cdf')` {#Uniform.cdf} + +CDF of observations in `x` under these Uniform distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `a` and `b`. +* `name`: The name to give this op. + +##### Returns: + + +* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. If `x` is `nan`, will + return `nan`. + + +- - - + +#### `tf.contrib.distributions.Uniform.dtype` {#Uniform.dtype} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.entropy(name='entropy')` {#Uniform.entropy} + +The entropy of Uniform distribution(s). + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. 
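For a uniform distribution on `[a, b]` the differential entropy has the closed form `log(b - a)`, since the density is the constant `1/(b - a)`. A hedged NumPy check of that formula (illustration of the math only, not the TensorFlow implementation; `uniform_entropy` is a hypothetical name):

```python
import numpy as np

def uniform_entropy(a, b):
    # Differential entropy of Uniform(a, b): the integral of -p*log(p)
    # over [a, b] with constant density p = 1/(b - a) reduces to log(b - a).
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.log(b - a)

h_unit = uniform_entropy(0.0, 1.0)  # the unit interval has entropy log(1) = 0
# Broadcasts over batches of (a, b) pairs, like the distribution parameters.
h_batch = uniform_entropy([1.0, 2.0], [3.0, 4.0])
```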
+ + +- - - + +#### `tf.contrib.distributions.Uniform.event_shape(name='event_shape')` {#Uniform.event_shape} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.get_batch_shape()` {#Uniform.get_batch_shape} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.get_event_shape()` {#Uniform.get_event_shape} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.is_reparameterized` {#Uniform.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.log_cdf(x, name='log_cdf')` {#Uniform.log_cdf} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.log_pdf(x, name='log_pdf')` {#Uniform.log_pdf} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.mean` {#Uniform.mean} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.name` {#Uniform.name} + + + + +- - - + +#### `tf.contrib.distributions.Uniform.pdf(x, name='pdf')` {#Uniform.pdf} + +The PDF of observations in `x` under these Uniform distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `a` and `b`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. If `x` is `nan`, will + return `nan`. + + +- - - + +#### `tf.contrib.distributions.Uniform.range` {#Uniform.range} + +`b - a`. + + +- - - + +#### `tf.contrib.distributions.Uniform.sample(n, seed=None, name='sample')` {#Uniform.sample} + +Sample `n` observations from the Uniform Distributions. + +##### Args: + + +* `n`: `Scalar`, type int32, the number of observations to sample. +* `seed`: Python integer, the random seed. +* `name`: The name to give this op. + +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. 
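The `(n,) + batch_shape + event_shape` shape contract above can be mimicked with NumPy's `np.random.uniform`, which broadcasts its `low` and `high` endpoints the same way the distribution parameters broadcast (a hedged sketch, not the TensorFlow sampler):

```python
import numpy as np

# Two independent uniform distributions, [1, 3] and [2, 4]:
# batch_shape is (2,) and event_shape is scalar, i.e. ().
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
n = 5

# Requesting size (n,) + batch_shape broadcasts low/high across the
# leading sample dimension, matching the documented sample shape.
samples = np.random.uniform(low=a, high=b, size=(n,) + a.shape)

assert samples.shape == (n,) + a.shape   # (5, 2)
assert np.all(samples >= a) and np.all(samples < b)
```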
+ + +- - - + +#### `tf.contrib.distributions.Uniform.variance` {#Uniform.variance} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md deleted file mode 100644 index ae8eb00890..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.contrib.distributions.normal_conjugates_known_sigma_posterior(prior, sigma, s, n)` {#normal_conjugates_known_sigma_posterior} - -Posterior Normal distribution with conjugate prior on the mean. - -This model assumes that `n` observations (with sum `s`) come from a -Normal with unknown mean `mu` (described by the Normal `prior`) -and known variance `sigma^2`. The "known sigma posterior" is -the distribution of the unknown `mu`. - -Accepts a prior Normal distribution object, having parameters -`mu0` and `sigma0`, as well as known `sigma` values of the predictive -distribution(s) (also assumed Normal), -and statistical estimates `s` (the sum(s) of the observations) and -`n` (the number(s) of observations). - -Returns a posterior (also Normal) distribution object, with parameters -`(mu', sigma'^2)`, where: - -``` -mu ~ N(mu', sigma'^2) -sigma'^2 = 1/(1/sigma0^2 + n/sigma^2), -mu' = (mu0/sigma0^2 + s/sigma^2) * sigma'^2. -``` - -Distribution parameters from `prior`, as well as `sigma`, `s`, and `n`. -will broadcast in the case of multidimensional sets of parameters. - -##### Args: - - -* `prior`: `Normal` object of type `dtype`: - the prior distribution having parameters `(mu0, sigma0)`. -* `sigma`: tensor of type `dtype`, taking values `sigma > 0`. - The known stddev parameter(s). -* `s`: Tensor of type `dtype`. The sum(s) of observations. -* `n`: Tensor of type `int`. 
The number(s) of observations. - -##### Returns: - - A new Normal posterior distribution object for the unknown observation - mean `mu`. - -##### Raises: - - -* `TypeError`: if dtype of `s` does not match `dtype`, or `prior` is not a - Normal object. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.decode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.decode_audio.md deleted file mode 100644 index 31b9cba01f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.decode_audio.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.contrib.ffmpeg.decode_audio(contents, file_format=None, samples_per_second=None, channel_count=None)` {#decode_audio} - -Create an op that decodes the contents of an audio file. - -##### Args: - - -* `contents`: The binary contents of the audio file to decode. This is a - scalar. -* `file_format`: A string specifying which format the contents will conform - to. This can be mp3, ogg, or wav. -* `samples_per_second`: The number of samples per second that is assumed. - In some cases, resampling will occur to generate the correct sample - rate. -* `channel_count`: The number of channels that should be created from the - audio contents. If the contents have more than this number, then - some channels will be merged or dropped. If contents has fewer than - this, then additional channels will be created from the existing ones. - -##### Returns: - - A rank 2 tensor that has time along dimension 0 and channels along - dimension 1. Dimension 0 will be `samples_per_second * length` wide, and - dimension 1 will be `channel_count` wide. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.apply_regularization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.apply_regularization.md new file mode 100644 index 0000000000..8216a4fa25 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.apply_regularization.md @@ -0,0 +1,27 @@ +### `tf.contrib.layers.apply_regularization(regularizer, weights_list=None)` {#apply_regularization} + +Returns the summed penalty by applying `regularizer` to the `weights_list`. + +Adding a regularization penalty over the layer weights and embedding weights +can help prevent overfitting the training data. Regularization over layer +biases is less common/useful, but assuming proper data preprocessing/mean +subtraction, it usually shouldn't hurt much either. + +##### Args: + + +* `regularizer`: A function that takes a single `Tensor` argument and returns + a scalar `Tensor` output. +* `weights_list`: List of weights `Tensors` or `Variables` to apply + `regularizer` over. Defaults to the `GraphKeys.WEIGHTS` collection if + `None`. + +##### Returns: + + A scalar representing the overall regularization penalty. + +##### Raises: + + +* `ValueError`: If `regularizer` does not return a scalar output. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md new file mode 100644 index 0000000000..ee05583b04 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sum_regularizer.md @@ -0,0 +1,14 @@ +### `tf.contrib.layers.sum_regularizer(regularizer_list)` {#sum_regularizer} + +Returns a function that applies the sum of multiple regularizers. + +##### Args: + + +* `regularizer_list`: A list of regularizers to apply. 
+ +##### Returns: + + A function with signature `sum_reg(weights, name=None)` that applies the + sum of all the input regularizers. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensor.md new file mode 100644 index 0000000000..872ba5c9d4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_tensor.md @@ -0,0 +1,18 @@ +### `tf.contrib.layers.summarize_tensor(tensor, tag=None)` {#summarize_tensor} + +Summarize a tensor using a suitable summary type. + +This function adds a summary op for `tensor`. The type of summary depends on +the shape of `tensor`. For scalars, a `scalar_summary` is created, for all +other tensors, `histogram_summary` is used. + +##### Args: + + +* `tensor`: The tensor to summarize +* `tag`: The tag to use, if None then use tensor's op's name. + +##### Returns: + + The summary op created or None for string tensors. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.xavier_initializer_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.xavier_initializer_conv2d.md deleted file mode 100644 index 9deeb48b5b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.xavier_initializer_conv2d.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.contrib.layers.xavier_initializer_conv2d(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer_conv2d} - -Returns an initializer performing "Xavier" initialization for weights. - -This function implements the weight initialization from: - -Xavier Glorot and Yoshua Bengio (2010): - Understanding the difficulty of training deep feedforward neural - networks. International conference on artificial intelligence and - statistics. 
- -This initializer is designed to keep the scale of the gradients roughly the -same in all layers. In uniform distribution this ends up being the range: -`x = sqrt(6. / (in + out)); [-x, x]` and for normal distribution a standard -deviation of `sqrt(3. / (in + out))` is used. - -##### Args: - - -* `uniform`: Whether to use uniform or normal distributed random initialization. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer for a weight matrix. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.ModeKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.ModeKeys.md deleted file mode 100644 index 83e0bd4119..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.ModeKeys.md +++ /dev/null @@ -1,7 +0,0 @@ -Standard names for model modes. - -The following standard keys are defined: - -* `TRAIN`: training mode. -* `EVAL`: evaluation mode. -* `INFER`: inference mode. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowDNNRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowDNNRegressor.md deleted file mode 100644 index 182b81de75..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowDNNRegressor.md +++ /dev/null @@ -1,302 +0,0 @@ -TensorFlow DNN Regressor model. - -Parameters: - hidden_units: List of hidden units per layer. - batch_size: Mini batch size. - steps: Number of steps to run over data. - optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad". - learning_rate: If this is constant float value, no decay function is - used. 
Instead, a customized decay function can be passed that accepts - global_step as parameter and returns a Tensor. - e.g. exponential decay function: - def exp_decay(global_step): - return tf.train.exponential_decay( - learning_rate=0.1, global_step, - decay_steps=2, decay_rate=0.001) - continue_training: when continue_training is True, once initialized - model will be continuely trained on every call of fit. - config: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. - verbose: Controls the verbosity, possible values: - 0: the algorithm and debug information is muted. - 1: trainer prints the progress. - 2: log device placement is printed. - dropout: When not None, the probability we will drop out a given coordinate. -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.__init__(hidden_units, n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1, dropout=None)` {#TensorFlowDNNRegressor.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.bias_` {#TensorFlowDNNRegressor.bias_} - -Returns bias of the DNN's bias layers. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowDNNRegressor.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowDNNRegressor.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. -This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. 
The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.get_params(deep=True)` {#TensorFlowDNNRegressor.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.get_tensor(name)` {#TensorFlowDNNRegressor.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.get_tensor_value(name)` {#TensorFlowDNNRegressor.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.get_variable_names()` {#TensorFlowDNNRegressor.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.model_dir` {#TensorFlowDNNRegressor.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.partial_fit(x, y)` {#TensorFlowDNNRegressor.partial_fit} - -Incremental fit on a batch of samples. 
- -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This can either -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time, or when the model is taking a long time -to converge and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowDNNRegressor.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.predict_proba(x, batch_size=None)` {#TensorFlowDNNRegressor.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches.
By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.restore(cls, path, config=None)` {#TensorFlowDNNRegressor.restore} - -Restores model from the given path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.save(path)` {#TensorFlowDNNRegressor.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.set_params(**params)` {#TensorFlowDNNRegressor.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowDNNRegressor.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNRegressor.weights_` {#TensorFlowDNNRegressor.weights_} - -Returns weights of the DNN weight layers.
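The `exp_decay` example in the parameters above passes a positional argument after a keyword argument, which is invalid Python; the corrected call passes the rate positionally. The schedule it computes can be sketched without TensorFlow (a plain-Python sketch of the continuous, non-staircase formula behind `tf.train.exponential_decay`; the function and variable names here are illustrative):

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate):
    # decayed_rate = learning_rate * decay_rate ** (global_step / decay_steps)
    return learning_rate * decay_rate ** (global_step / decay_steps)

# A decay callable like exp_decay starts at the base rate and shrinks
# geometrically as global_step grows.
rates = [exponential_decay(0.1, step, decay_steps=1000, decay_rate=0.96)
         for step in (0, 1000, 2000)]
```

Any callable with this shape (global_step in, rate out) can be passed as `learning_rate` to the estimator.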
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowRNNRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowRNNRegressor.md deleted file mode 100644 index d23cd65402..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.TensorFlowRNNRegressor.md +++ /dev/null @@ -1,312 +0,0 @@ -TensorFlow RNN Regressor model. - -Parameters: - rnn_size: The size for rnn cell, e.g. size of your word embeddings. - cell_type: The type of rnn cell, including rnn, gru, and lstm. - num_layers: The number of layers of the rnn model. - input_op_fn: Function that will transform the input tensor, such as - creating word embeddings, byte list, etc. This takes - an argument X for input and returns transformed X. - bidirectional: boolean, Whether this is a bidirectional rnn. - sequence_length: If sequence_length is provided, dynamic calculation is - performed. This saves computational time when unrolling past max sequence - length. - initial_state: An initial state for the RNN. This must be a tensor of - appropriate type and shape [batch_size x cell.state_size]. - batch_size: Mini batch size. - steps: Number of steps to run over data. - optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad". - learning_rate: If this is a constant float value, no decay function is - used. Instead, a customized decay function can be passed that accepts - global_step as parameter and returns a Tensor. - e.g. exponential decay function: - def exp_decay(global_step): - return tf.train.exponential_decay( - 0.1, global_step, - decay_steps=2, decay_rate=0.001) - continue_training: when continue_training is True, once initialized - model will be continually trained on every call of fit. - config: RunConfig object that controls the configurations of the - session, e.g. num_cores, gpu_memory_fraction, etc.
- verbose: Controls the verbosity, possible values: - 0: the algorithm and debug information is muted. - 1: trainer prints the progress. - 2: log device placement is printed. -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.__init__(rnn_size, cell_type='gru', num_layers=1, input_op_fn=null_input_op_fn, initial_state=None, bidirectional=False, sequence_length=None, n_classes=0, batch_size=32, steps=50, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRNNRegressor.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.bias_` {#TensorFlowRNNRegressor.bias_} - -Returns bias of the rnn layer. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRNNRegressor.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRNNRegressor.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: the first call constructs the graph and initializes -variables. Consecutive calls will continue training the same model. -This logic follows the partial_fit() interface in scikit-learn. - -To restart learning, create a new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping.
-* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.get_params(deep=True)` {#TensorFlowRNNRegressor.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.get_tensor(name)` {#TensorFlowRNNRegressor.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.get_tensor_value(name)` {#TensorFlowRNNRegressor.get_tensor_value} - -Returns value of the tensor given by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.get_variable_names()` {#TensorFlowRNNRegressor.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.model_dir` {#TensorFlowRNNRegressor.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.partial_fit(x, y)` {#TensorFlowRNNRegressor.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This can either -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time, or when the model is taking a long time -to converge and you want to split up training into subparts.
- -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowRNNRegressor.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.predict_proba(x, batch_size=None)` {#TensorFlowRNNRegressor.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.restore(cls, path, config=None)` {#TensorFlowRNNRegressor.restore} - -Restores model from the given path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information.
-* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.save(path)` {#TensorFlowRNNRegressor.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.set_params(**params)` {#TensorFlowRNNRegressor.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowRNNRegressor.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNRegressor.weights_` {#TensorFlowRNNRegressor.weights_} - -Returns weights of the rnn layer. 
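The scikit-learn-style contract these estimator docs describe — `fit` continues training the same model across calls, `partial_fit` consumes one chunk at a time, and both return `self` so calls can be chained — can be illustrated with a toy pure-Python estimator (no TensorFlow; the running-mean "model" is purely illustrative, not the RNN regressor):

```python
class MiniEstimator:
    """Toy estimator illustrating the fit/partial_fit contract."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def partial_fit(self, x, y):
        # One incremental pass over a chunk of targets; state is kept
        # between calls, which is what enables out-of-core training.
        for target in y:
            self.count += 1
            self.mean += (target - self.mean) / self.count
        return self

    def fit(self, x, y):
        # Like the docs above: a second fit() continues training the
        # same model rather than starting over.
        return self.partial_fit(x, y)

    def predict(self, x):
        return [self.mean for _ in x]

est = MiniEstimator()
est.fit(None, [1.0, 2.0, 3.0]).partial_fit(None, [4.0])
pred = est.predict([0])
```

To restart learning with this contract, you construct a fresh estimator instead of re-calling `fit`.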
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_labels.md new file mode 100644 index 0000000000..15831ce758 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_labels.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.extract_dask_labels(labels)` {#extract_dask_labels} + +Extract data from dask.Series for labels + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_examples.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_examples.md new file mode 100644 index 0000000000..c5cec0542a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_examples.md @@ -0,0 +1,39 @@ +### `tf.contrib.learn.read_batch_examples(file_pattern, batch_size, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, num_threads=1, name=None)` {#read_batch_examples} + +Adds operations to read, queue, batch `Example` protos. + +Given file pattern (or list of files), will setup a queue for file names, +read `Example` proto using provided `reader`, use batch queue to create +batches of examples of size `batch_size`. + +All queue runners are added to the queue runners collection, and may be +started via `start_queue_runners`. + +All ops are added to the default graph. + +##### Args: + + +* `file_pattern`: List of files or pattern of file paths containing + `Example` records. See `tf.gfile.Glob` for pattern rules. +* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. +* `reader`: A function or class that returns an object with + `read` method, (filename tensor) -> (example tensor). +* `randomize_input`: Whether the input should be randomized. 
+* `num_epochs`: Integer specifying the number of times to read through the + dataset. If `None`, cycles through the dataset forever. + NOTE - If specified, creates a variable that must be initialized, so call + `tf.initialize_all_variables()` as shown in the tests. +* `queue_capacity`: Capacity for input queue. +* `num_threads`: The number of threads enqueuing examples. +* `name`: Name of resulting op. + +##### Returns: + + String `Tensor` of batched `Example` proto. + +##### Raises: + + +* `ValueError`: for invalid inputs. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_record_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_record_features.md deleted file mode 100644 index aa4e964be1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.read_batch_record_features.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.contrib.learn.read_batch_record_features(file_pattern, batch_size, features, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, parser_num_threads=1, name='dequeue_record_examples')` {#read_batch_record_features} - -Reads TFRecord, queues, batches and parses `Example` proto. - -See more detailed description in `read_examples`. - -##### Args: - - -* `file_pattern`: List of files or pattern of file paths containing - `Example` records. See `tf.gfile.Glob` for pattern rules. -* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. -* `features`: A `dict` mapping feature keys to `FixedLenFeature` or - `VarLenFeature` values. -* `randomize_input`: Whether the input should be randomized. -* `num_epochs`: Integer specifying the number of times to read through the - dataset. If None, cycles through the dataset forever. NOTE - If specified, - creates a variable that must be initialized, so call - tf.initialize_all_variables() as shown in the tests. 
-* `queue_capacity`: Capacity for input queue. -* `reader_num_threads`: The number of threads to read examples. -* `parser_num_threads`: The number of threads to parse examples. -* `name`: Name of resulting op. - -##### Returns: - - A dict of `Tensor` or `SparseTensor` objects for each in `features`. - -##### Raises: - - -* `ValueError`: for invalid inputs. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.accuracy.md new file mode 100644 index 0000000000..f41fb78e31 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.accuracy.md @@ -0,0 +1,23 @@ +### `tf.contrib.metrics.accuracy(predictions, labels, weights=None)` {#accuracy} + +Computes the percentage of times that predictions matches labels. + +##### Args: + + +* `predictions`: the predicted values, a `Tensor` whose dtype and shape + matches 'labels'. +* `labels`: the ground truth values, a `Tensor` of any shape and + integer or string dtype. +* `weights`: None or `Tensor` of float values to reweight the accuracy. + +##### Returns: + + Accuracy `Tensor`. + +##### Raises: + + +* `ValueError`: if dtypes don't match or + if dtype is not integer or string. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md deleted file mode 100644 index 01f67e402c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)` {#auc_using_histogram} - -AUC computed by maintaining histograms. 
- -Rather than computing AUC directly, this Op maintains Variables containing -histograms of the scores associated with `True` and `False` labels. By -comparing these, the AUC is generated, with some discretization error. -See: "Efficient AUC Learning Curve Calculation" by Bouckaert. - -This AUC Op updates in `O(batch_size + nbins)` time and works well even with -large class imbalance. The accuracy is limited by discretization error due -to the finite number of bins. If scores are concentrated in fewer bins, -accuracy is lower. If this is a concern, we recommend trying different -numbers of bins and comparing results. - -##### Args: - - -* `boolean_labels`: 1-D boolean `Tensor`. Entry is `True` if the corresponding - record is in class. -* `scores`: 1-D numeric `Tensor`, same shape as boolean_labels. -* `score_range`: `Tensor` of shape `[2]`, same dtype as `scores`. The min/max - values of score that we expect. Scores outside range will be clipped. -* `nbins`: Integer number of bins to use. Accuracy strictly increases as the - number of bins increases. -* `collections`: List of graph collections keys. Internal histogram Variables - are added to these collections. Defaults to `[GraphKeys.LOCAL_VARIABLES]`. -* `check_shape`: Boolean. If `True`, do a runtime shape check on the scores - and labels. -* `name`: A name for this Op. Defaults to "auc_using_histogram". - -##### Returns: - - -* `auc`: `float32` scalar `Tensor`. Fetching this converts internal histograms - to auc value. -* `update_op`: `Op`, when run, updates internal histograms.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.set_union.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.set_union.md new file mode 100644 index 0000000000..bb378fe2a2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.set_union.md @@ -0,0 +1,23 @@ +### `tf.contrib.metrics.set_union(a, b, validate_indices=True)` {#set_union} + +Compute set union of elements in last dimension of `a` and `b`. + +All but the last dimension of `a` and `b` must match. + +##### Args: + + +* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices + must be sorted in row-major order. +* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be + `SparseTensor` if `a` is `SparseTensor`. If sparse, indices must be + sorted in row-major order. +* `validate_indices`: Whether to validate the order and range of sparse indices + in `a` and `b`. + +##### Returns: + + A `SparseTensor` with the same rank as `a` and `b`, and all but the last + dimension the same. Elements along the last dimension contain the + unions. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_auc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_auc.md new file mode 100644 index 0000000000..2d444fac54 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_auc.md @@ -0,0 +1,58 @@ +### `tf.contrib.metrics.streaming_auc(predictions, labels, ignore_mask=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_auc} + +Computes the approximate AUC via a Riemann sum. + +The `streaming_auc` function creates four local variables, `true_positives`, +`true_negatives`, `false_positives` and `false_negatives` that are used to +compute the AUC. 
To discretize the AUC curve, a linearly spaced set of +thresholds is used to compute pairs of recall and precision values. The area +under the curve is therefore computed using the height of the recall values +by the false positive rate. + +This value is ultimately returned as `auc`, an idempotent +operation that computes the area under a discretized curve of precision versus +recall values (computed using the aforementioned variables). The +`num_thresholds` variable controls the degree of discretization, with larger +numbers of thresholds more closely approximating the true AUC. + +To facilitate the estimation of the AUC over a stream of data, the function +creates an `update_op` operation whose behavior is dependent on the value of +`ignore_mask`. If `ignore_mask` is `None`, then `update_op` increments the +`true_positives`, `true_negatives`, `false_positives` and `false_negatives` +counts with the number of each found in the current `predictions` and `labels` +`Tensors`. If `ignore_mask` is not `None`, then the increment is performed +using only the elements of `predictions` and `labels` whose corresponding +value in `ignore_mask` is `False`. In addition to performing the updates, +`update_op` also returns the `auc`. + +##### Args: + + +* `predictions`: A floating point `Tensor` of arbitrary shape and whose values + are in the range `[0, 1]`. +* `labels`: A binary `Tensor` whose shape matches `predictions`. +* `ignore_mask`: An optional, binary tensor whose size matches `predictions`. +* `num_thresholds`: The number of thresholds to use when discretizing the roc + curve. +* `metrics_collections`: An optional list of collections that `auc` should be + added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `auc`: A scalar tensor representing the current area-under-curve.
+* `update_op`: An operation that increments the `true_positives`, + `true_negatives`, `false_positives` and `false_negatives` variables + appropriately and whose value matches `auc`. + +##### Raises: + + +* `ValueError`: If the shape of `predictions` and `labels` do not match or if + `ignore_mask` is not `None` and its shape doesn't match `predictions` or + if either `metrics_collections` or `updates_collections` are not a list or + tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.make_ndarray.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.make_ndarray.md deleted file mode 100644 index 7b2a81d48e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.make_ndarray.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.contrib.util.make_ndarray(tensor)` {#make_ndarray} - -Create a numpy ndarray from a tensor. - -Create a numpy ndarray with the same shape and data as the tensor. - -##### Args: - - -* `tensor`: A TensorProto. - -##### Returns: - - A numpy array with the tensor contents. - -##### Raises: - - -* `TypeError`: if tensor has unsupported type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.ops_used_by_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.ops_used_by_graph_def.md deleted file mode 100644 index 38a9cc4f43..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.ops_used_by_graph_def.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.contrib.util.ops_used_by_graph_def(graph_def)` {#ops_used_by_graph_def} - -Collect the list of ops used by a graph. - -Does not validate that the ops are all registered. - -##### Args: - - -* `graph_def`: A `GraphDef` proto, as from `graph.as_graph_def()`. - -##### Returns: - - A list of strings, each naming an op used by the graph. 
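The Riemann-sum ROC AUC described in the `tf.contrib.metrics.streaming_auc` entry above can be sketched in plain Python. This is an illustrative sketch, not the TensorFlow implementation: it recomputes the `true_positives`/`false_positives`/`true_negatives`/`false_negatives` counts per threshold on a single batch rather than streaming them, and the function name is invented:

```python
def riemann_auc(predictions, labels, num_thresholds=200):
    """Approximate ROC AUC via a Riemann (trapezoidal) sum over
    linearly spaced thresholds, mirroring the four counts that
    streaming_auc accumulates."""
    thresholds = [i / (num_thresholds - 1) for i in range(num_thresholds)]
    points = []
    for t in thresholds:
        tp = sum(1 for p, l in zip(predictions, labels) if p >= t and l)
        fp = sum(1 for p, l in zip(predictions, labels) if p >= t and not l)
        fn = sum(1 for p, l in zip(predictions, labels) if p < t and l)
        tn = sum(1 for p, l in zip(predictions, labels) if p < t and not l)
        tpr = tp / (tp + fn) if tp + fn else 0.0  # recall
        fpr = fp / (fp + tn) if fp + tn else 0.0  # false positive rate
        points.append((fpr, tpr))
    points.sort()
    # Area under the (FPR, TPR) curve by trapezoids.
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

auc = riemann_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

With more thresholds the sum more closely approximates the true AUC, which is the trade-off `num_thresholds` controls.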
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.stripped_op_list_for_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.stripped_op_list_for_graph.md new file mode 100644 index 0000000000..23bfb28542 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.util.stripped_op_list_for_graph.md @@ -0,0 +1,23 @@ +### `tf.contrib.util.stripped_op_list_for_graph(graph_def)` {#stripped_op_list_for_graph} + +Collect the stripped OpDefs for ops used by a graph. + +This function computes the `stripped_op_list` field of `MetaGraphDef` and +similar protos. The result can be communicated from the producer to the +consumer, which can then use the C++ function +`RemoveNewDefaultAttrsFromGraphDef` to improve forwards compatibility. + +##### Args: + + +* `graph_def`: A `GraphDef` proto, as from `graph.as_graph_def()`. + +##### Returns: + + An `OpList` of ops used by the graph. + +##### Raises: + + +* `ValueError`: If an unregistered op is used. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md deleted file mode 100644 index 0c65e8327c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.convert_to_tensor_or_indexed_slices(value, dtype=None, name=None, as_ref=False)` {#convert_to_tensor_or_indexed_slices} - -Converts the given object to a `Tensor` or an `IndexedSlices`. - -If `value` is an `IndexedSlices` or `SparseTensor` it is returned -unmodified. Otherwise, it is converted to a `Tensor` using -`convert_to_tensor()`. - -##### Args: - - -* `value`: An `IndexedSlices`, `SparseTensor`, or an object that can be consumed - by `convert_to_tensor()`. -* `dtype`: (Optional.) 
The required `DType` of the returned `Tensor` or - `IndexedSlices`. -* `name`: (Optional.) A name to use if a new `Tensor` is created. -* `as_ref`: True if the caller wants the results as ref tensors. - -##### Returns: - - A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`. - -##### Raises: - - -* `ValueError`: If `dtype` does not match the element type of `value`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.depth_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.depth_to_space.md deleted file mode 100644 index c0117c82c7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.depth_to_space.md +++ /dev/null @@ -1,95 +0,0 @@ -### `tf.depth_to_space(input, block_size, name=None)` {#depth_to_space} - -DepthToSpace for tensors of type T. - -Rearranges data from depth into blocks of spatial data. -This is the reverse transformation of SpaceToDepth. More specifically, -this op outputs a copy of the input tensor where values from the `depth` -dimension are moved in spatial blocks to the `height` and `width` dimensions. -The attr `block_size` indicates the input block size and how the data is moved. - - * Chunks of data of size `block_size * block_size` from depth are rearranged - into non-overlapping blocks of size `block_size x block_size` - * The width of the output tensor is `input_width * block_size`, whereas the - height is `input_height * block_size`. - * The depth of the input tensor must be divisible by - `block_size * block_size`. - -That is, assuming the input is in the shape: -`[batch, height, width, depth]`, -the shape of the output will be: -`[batch, height*block_size, width*block_size, depth/(block_size*block_size)]` - -This operation requires that the input tensor be of rank 4, and that -`block_size` be >=1 and that `block_size * block_size` be a divisor of the -input depth.
- -This operation is useful for resizing the activations between convolutions -(but keeping all data), e.g. instead of pooling. It is also useful for training -purely convolutional models. - -For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2: - -```prettyprint -x = [[[[1, 2, 3, 4]]]] - -``` - -This operation will output a tensor of shape `[1, 2, 2, 1]`: - -```prettyprint - [[[[1], [2]], - [[3], [4]]]] -``` - -Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, -the corresponding output will have 2x2 elements and will have a depth of -1 channel (1 = `4 / (block_size * block_size)`). -The output element shape is `[2, 2, 1]`. - -For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g. - -```prettyprint -x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] -``` - -This operation, for block size of 2, will return the following tensor of shape -`[1, 2, 2, 3]` - -```prettyprint - [[[[1, 2, 3], [4, 5, 6]], - [[7, 8, 9], [10, 11, 12]]]] - -``` - -Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2: - -```prettyprint -x = [[[[1, 2, 3, 4], - [5, 6, 7, 8]], - [[9, 10, 11, 12], - [13, 14, 15, 16]]]] -``` - -the operator will return the following tensor of shape `[1 4 4 1]`: - -```prettyprint -x = [[ [1], [2], [5], [6]], - [ [3], [4], [7], [8]], - [ [9], [10], [13], [14]], - [ [11], [12], [15], [16]]] - -``` - -##### Args: - - -* `input`: A `Tensor`. -* `block_size`: An `int`. - The size of the spatial block, same as in Space2Depth. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. 
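The rearrangement that the `tf.depth_to_space` examples above walk through can be sketched in plain Python on nested `[batch, height, width, depth]` lists (an illustrative sketch of the same index mapping, not the op itself):

```python
def depth_to_space(x, block_size):
    """Move depth chunks into block_size x block_size spatial blocks."""
    batch = len(x)
    height, width, depth = len(x[0]), len(x[0][0]), len(x[0][0][0])
    assert depth % (block_size * block_size) == 0
    out_depth = depth // (block_size * block_size)
    out = [[[[None] * out_depth
             for _ in range(width * block_size)]
            for _ in range(height * block_size)]
           for _ in range(batch)]
    for b in range(batch):
        for h in range(height):
            for w in range(width):
                for d in range(depth):
                    # Depth index decomposes as
                    # d = (block_row * block_size + block_col) * out_depth + oc.
                    oc = d % out_depth
                    bc = (d // out_depth) % block_size
                    br = d // (out_depth * block_size)
                    out[b][h * block_size + br][w * block_size + bc][oc] = \
                        x[b][h][w][d]
    return out

# The first example above: shape [1, 1, 1, 4], block size 2 -> [1, 2, 2, 1].
result = depth_to_space([[[[1, 2, 3, 4]]]], 2)
```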
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.erf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.erf.md deleted file mode 100644 index 3a425b7c4a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.erf.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.erf(x, name=None)` {#erf} - -Computes the Gauss error function of `x` element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.DeadlineExceededError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.DeadlineExceededError.md deleted file mode 100644 index e8ef3be06e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.DeadlineExceededError.md +++ /dev/null @@ -1,11 +0,0 @@ -Raised when a deadline expires before an operation could complete. - -This exception is not currently used. - -- - - - -#### `tf.errors.DeadlineExceededError.__init__(node_def, op, message)` {#DeadlineExceededError.__init__} - -Creates a `DeadlineExceededError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.NotFoundError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.NotFoundError.md deleted file mode 100644 index 49fec3c55c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.NotFoundError.md +++ /dev/null @@ -1,14 +0,0 @@ -Raised when a requested entity (e.g., a file or directory) was not found. - -For example, running the -[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader) -operation could raise `NotFoundError` if it receives the name of a file that -does not exist. 
- -- - - - -#### `tf.errors.NotFoundError.__init__(node_def, op, message)` {#NotFoundError.__init__} - -Creates a `NotFoundError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md deleted file mode 100644 index 6bceeabd27..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.exp(x, name=None)` {#exp} - -Computes exponential of x element-wise. \\(y = e^x\\). - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather_nd.md new file mode 100644 index 0000000000..7c8777a660 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gather_nd.md @@ -0,0 +1,30 @@ +### `tf.gather_nd(params, indices, name=None)` {#gather_nd} + +Gather values from `params` according to `indices`. + +`indices` must be integer tensor, containing indices into `params`. +It must be shape `[d_0, ..., d_N, R]` where `R` is the rank of `params`. +The innermost dimension of `indices` (with length `R`) corresponds to the +indices of `params`. + +Produces an output tensor with shape `[d_0, ..., d_{n-1}]` where: + + output[i, j, k, ...] = params[indices[i, j, k, ..., :]] + +e.g. for `indices` a matrix: + + output[i] = params[indices[i, :]] + +##### Args: + + +* `params`: A `Tensor`. R-D. The tensor from which to gather values. +* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + (N+1)-D. Index tensor having shape `[d_0, ..., d_N, R]`. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + A `Tensor`. Has the same type as `params`. + N-D. Values from `params` gathered from indices given by `indices`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection_ref.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection_ref.md new file mode 100644 index 0000000000..c393da2233 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.get_collection_ref.md @@ -0,0 +1,20 @@ +### `tf.get_collection_ref(key)` {#get_collection_ref} + +Wrapper for `Graph.get_collection_ref()` using the default graph. + +See [`Graph.get_collection_ref()`](../../api_docs/python/framework.md#Graph.get_collection_ref) +for more details. + +##### Args: + + +* `key`: The key for the collection. For example, the `GraphKeys` class + contains many standard names for collections. + +##### Returns: + + The list of values in the collection with the given `name`, or an empty + list if no value has been added to that collection. Note that this returns + the collection list itself, which can be modified in place to change the + collection. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gradients.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gradients.md deleted file mode 100644 index ea710b2a15..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.gradients.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients} - -Constructs symbolic partial derivatives of sum of `ys` w.r.t. x in `xs`. - -`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys` -is a list of `Tensor`, holding the gradients received by the -`ys`. The list must be the same length as `ys`. - -`gradients()` adds ops to the graph to output the partial -derivatives of `ys` with respect to `xs`. 
It returns a list of -`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)` -for y in `ys`. - -`grad_ys` is a list of tensors of the same length as `ys` that holds -the initial gradients for each y in `ys`. When `grad_ys` is None, -we fill in a tensor of '1's of the shape of y for each y in `ys`. A -user can provide their own initial `grad_ys` to compute the -derivatives using a different initial gradient for each y (e.g., if -one wanted to weight the gradient differently for each value in -each y). - -##### Args: - - -* `ys`: A `Tensor` or list of tensors to be differentiated. -* `xs`: A `Tensor` or list of tensors to be used for differentiation. -* `grad_ys`: Optional. A `Tensor` or list of tensors the same size as - `ys` and holding the gradients computed for each y in `ys`. -* `name`: Optional name to use for grouping all the gradient ops together. - defaults to 'gradients'. -* `colocate_gradients_with_ops`: If True, try colocating gradients with - the corresponding op. -* `gate_gradients`: If True, add a tuple around the gradients returned - for an operations. This avoids some race conditions. -* `aggregation_method`: Specifies the method used to combine gradient terms. - Accepted values are constants defined in the class `AggregationMethod`. - -##### Returns: - - A list of `sum(dy/dx)` for each x in `xs`. - -##### Raises: - - -* `LookupError`: if one of the operations between `x` and `y` does not - have a registered gradient function. -* `ValueError`: if the arguments are invalid. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.greater.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.greater.md new file mode 100644 index 0000000000..c629a0286f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.greater.md @@ -0,0 +1,15 @@ +### `tf.greater(x, y, name=None)` {#greater} + +Returns the truth value of (x > y) element-wise. + +##### Args: + + +* `x`: A `Tensor`. 
Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.identity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.identity.md deleted file mode 100644 index 13f1318601..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.identity.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.identity(input, name=None)` {#identity} - -Return a tensor with the same shape and contents as the input tensor or value. - -##### Args: - - -* `input`: A `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft3d.md deleted file mode 100644 index 35d58888ac..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft3d.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.ifft3d(input, name=None)` {#ifft3d} - -Compute the inverse 3-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 3-D tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - The inverse 3D Fourier Transform of `input`. 
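The `tf.ifft3d` doc above describes the inverse 3-D discrete Fourier transform, which undoes `tf.fft3d`. That round-trip property can be sketched outside TensorFlow with NumPy's N-dimensional FFT routines (NumPy stands in for the `complex64` op here; this is an illustration of the math, not the TF kernel):

```python
import numpy as np

# A small complex 3-D volume standing in for the `complex64` input tensor.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4)) + 1j * rng.standard_normal((2, 3, 4))

# np.fft.fftn / np.fft.ifftn compute the forward and inverse N-dimensional
# DFT, the same mathematical operation tf.fft3d / tf.ifft3d apply to a
# 3-D tensor.
spectrum = np.fft.fftn(x)
recovered = np.fft.ifftn(spectrum)

# Inverse-of-forward recovers the original volume (up to float rounding).
assert np.allclose(recovered, x)
```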
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md deleted file mode 100644 index 2fbf1b3e2a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.image.adjust_contrast(images, contrast_factor)` {#adjust_contrast} - -Adjust contrast of RGB or grayscale images. - -This is a convenience method that converts an RGB image to float -representation, adjusts its contrast, and then converts it back to the -original data type. If several adjustments are chained it is advisable to -minimize the number of redundant conversions. - -`images` is a tensor of at least 3 dimensions. The last 3 dimensions are -interpreted as `[height, width, channels]`. The other dimensions only -represent a collection of images, such as `[batch, height, width, channels].` - -Contrast is adjusted independently for each channel of each image. - -For each channel, this Op computes the mean of the image pixels in the -channel and then adjusts each component `x` of each pixel to -`(x - mean) * contrast_factor + mean`. - -##### Args: - - -* `images`: Images to adjust. At least 3-D. -* `contrast_factor`: A float multiplier for adjusting contrast. - -##### Returns: - - The contrast-adjusted image or images. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_hue.md new file mode 100644 index 0000000000..e334e26184 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_hue.md @@ -0,0 +1,26 @@ +### `tf.image.adjust_hue(image, delta, name=None)` {#adjust_hue} + +Adjust hue of an RGB image. 
+
+This is a convenience method that converts an RGB image to float
+representation, converts it to HSV, adds an offset to the hue channel, converts
+back to RGB and then back to the original data type. If several adjustments
+are chained it is advisable to minimize the number of redundant conversions.
+
+`image` is an RGB image. The image hue is adjusted by converting the
+image to HSV and rotating the hue channel (H) by
+`delta`. The image is then converted back to RGB.
+
+`delta` must be in the interval `[-1, 1]`.
+
+##### Args:
+
+
+* `image`: RGB image or images. Size of the last dimension must be 3.
+* `delta`: float. How much to add to the hue channel.
+* `name`: A name for this operation (optional).
+
+##### Returns:
+
+  Adjusted image(s), same shape and DType as `image`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md
deleted file mode 100644
index 1829271ff6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.image.adjust_saturation(image, saturation_factor, name=None)` {#adjust_saturation}
-
-Adjust saturation of an RGB image.
-
-This is a convenience method that converts an RGB image to float
-representation, converts it to HSV, add an offset to the saturation channel,
-converts back to RGB and then back to the original data type. If several
-adjustments are chained it is advisable to minimize the number of redundant
-conversions.
-
-`image` is an RGB image. The image saturation is adjusted by converting the
-image to HSV and multiplying the saturation (S) channel by
-`saturation_factor` and clipping. The image is then converted back to RGB.
-
-##### Args:
-
-
-* `image`: RGB image or images. Size of the last dimension must be 3.
-* `saturation_factor`: float. Factor to multiply the saturation by.
-* `name`: A name for this operation (optional).
-
-##### Returns:
-
-  Adjusted image(s), same shape and DType as `image`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.extract_glimpse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.extract_glimpse.md
new file mode 100644
index 0000000000..e0ca72e2c5
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.extract_glimpse.md
@@ -0,0 +1,34 @@
+### `tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)` {#extract_glimpse}
+
+Extracts a glimpse from the input tensor.
+
+Returns a set of windows called glimpses extracted at locations
+`offsets` from the input tensor. If the windows only partially
+overlap the input, the non-overlapping areas will be filled with
+random noise.
+
+The result is a 4-D tensor of shape `[batch_size, glimpse_height,
+glimpse_width, channels]`. The channels and batch dimensions are the
+same as those of the input tensor. The height and width of the output
+windows are specified in the `size` parameter.
+
+The arguments `centered` and `normalized` control how the windows are
+built: if `centered` is true, the `offsets` are relative to the center of
+the input images, and if `normalized` is true, they are expressed as
+fractions of the image height and width rather than in pixels.
+
+##### Args:
+
+
+* `input`: A `Tensor` of type `float32`.
+* `size`: A `Tensor` of type `int32`.
+* `offsets`: A `Tensor` of type `float32`.
+* `centered`: An optional `bool`. Defaults to `True`.
+* `normalized`: An optional `bool`. Defaults to `True`.
+* `uniform_noise`: An optional `bool`. Defaults to `True`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `float32`.
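The windowing behavior of `extract_glimpse` can be sketched in plain NumPy for the simplest case: pixel-coordinate offsets naming the window center, with out-of-bounds pixels filled by noise. `extract_glimpse_np` is a hypothetical helper for illustration only, not the TF op:

```python
import numpy as np

def extract_glimpse_np(image, size, center):
    """Minimal sketch: crop a (gh, gw) window whose center is at pixel
    `center`, filling any out-of-bounds region with uniform noise,
    mirroring the op's behavior for partially overlapping windows."""
    gh, gw = size
    h, w = image.shape[:2]
    out = np.random.uniform(size=(gh, gw) + image.shape[2:])
    top, left = int(center[0]) - gh // 2, int(center[1]) - gw // 2
    for i in range(gh):
        for j in range(gw):
            y, x = top + i, left + j
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]  # in-bounds: copy the pixel
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
glimpse = extract_glimpse_np(img, (3, 3), (2, 2))
assert np.array_equal(glimpse, img[1:4, 1:4])  # fully inside: exact crop
```

A window centered near a corner keeps only its overlapping pixels; the rest of the glimpse is noise.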
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.grayscale_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.grayscale_to_rgb.md deleted file mode 100644 index 755b66141b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.grayscale_to_rgb.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.image.grayscale_to_rgb(images, name=None)` {#grayscale_to_rgb} - -Converts one or more images from Grayscale to RGB. - -Outputs a tensor of the same `DType` and rank as `images`. The size of the -last dimension of the output is 3, containing the RGB value of the pixels. - -##### Args: - - -* `images`: The Grayscale tensor to convert. Last dimension must be size 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The converted grayscale image(s). - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_hue.md new file mode 100644 index 0000000000..09a4ebc17f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_hue.md @@ -0,0 +1,28 @@ +### `tf.image.random_hue(image, max_delta, seed=None)` {#random_hue} + +Adjust the hue of an RGB image by a random factor. + +Equivalent to `adjust_hue()` but uses a `delta` randomly +picked in the interval `[-max_delta, max_delta]`. + +`max_delta` must be in the interval `[0, 0.5]`. + +##### Args: + + +* `image`: RGB image or images. Size of the last dimension must be 3. +* `max_delta`: float. Maximum value for the random delta. +* `seed`: An operation-specific seed. It will be used in conjunction + with the graph-level seed to determine the real seeds that will be + used in this operation. Please see the documentation of + set_random_seed for its interaction with the graph-level random seed. 
+ +##### Returns: + + 3-D float tensor of shape `[height, width, channels]`. + +##### Raises: + + +* `ValueError`: if `max_delta` is invalid. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image_summary.md new file mode 100644 index 0000000000..5df729544b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image_summary.md @@ -0,0 +1,48 @@ +### `tf.image_summary(tag, tensor, max_images=3, collections=None, name=None)` {#image_summary} + +Outputs a `Summary` protocol buffer with images. + +The summary has up to `max_images` summary values containing images. The +images are built from `tensor` which must be 4-D with shape `[batch_size, +height, width, channels]` and where `channels` can be: + +* 1: `tensor` is interpreted as Grayscale. +* 3: `tensor` is interpreted as RGB. +* 4: `tensor` is interpreted as RGBA. + +The images have the same number of channels as the input tensor. For float +input, the values are normalized one image at a time to fit in the range +`[0, 255]`. `uint8` values are unchanged. The op uses two different +normalization algorithms: + +* If the input values are all positive, they are rescaled so the largest one + is 255. + +* If any input value is negative, the values are shifted so input value 0.0 + is at 127. They are then rescaled so that either the smallest value is 0, + or the largest one is 255. + +The `tag` argument is a scalar `Tensor` of type `string`. It is used to +build the `tag` of the summary values: + +* If `max_images` is 1, the summary value tag is '*tag*/image'. +* If `max_images` is greater than 1, the summary value tags are + generated sequentially as '*tag*/image/0', '*tag*/image/1', etc. + +##### Args: + + +* `tag`: A scalar `Tensor` of type `string`. Used to build the `tag` + of the summary values. 
+* `tensor`: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, + width, channels]` where `channels` is 1, 3, or 4. +* `max_images`: Max number of batch elements to generate images for. +* `collections`: Optional list of ops.GraphKeys. The collections to add the + summary to. Defaults to [ops.GraphKeys.SUMMARIES] +* `name`: A name for the operation (optional). + +##### Returns: + + A scalar `Tensor` of type `string`. The serialized `Summary` protocol + buffer. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_non_decreasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_non_decreasing.md new file mode 100644 index 0000000000..f10ff932c0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_non_decreasing.md @@ -0,0 +1,25 @@ +### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing} + +Returns `True` if `x` is non-decreasing. + +Elements of `x` are compared in row-major order. The tensor `[x[0],...]` +is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`. +If `x` has less than two elements, it is trivially non-decreasing. + +See also: `is_strictly_increasing` + +##### Args: + + +* `x`: Numeric `Tensor`. +* `name`: A name for this operation (optional). Defaults to "is_non_decreasing" + +##### Returns: + + Boolean `Tensor`, equal to `True` iff `x` is non-decreasing. + +##### Raises: + + +* `TypeError`: if `x` is not a numeric tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_strictly_increasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_strictly_increasing.md new file mode 100644 index 0000000000..bdaedd519e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.is_strictly_increasing.md @@ -0,0 +1,26 @@ +### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing} + +Returns `True` if `x` is strictly increasing. 
+ +Elements of `x` are compared in row-major order. The tensor `[x[0],...]` +is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`. +If `x` has less than two elements, it is trivially strictly increasing. + +See also: `is_non_decreasing` + +##### Args: + + +* `x`: Numeric `Tensor`. +* `name`: A name for this operation (optional). + Defaults to "is_strictly_increasing" + +##### Returns: + + Boolean `Tensor`, equal to `True` iff `x` is strictly increasing. + +##### Raises: + + +* `TypeError`: if `x` is not a numeric tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.less_equal.md deleted file mode 100644 index 65d7eb5084..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.less_equal.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.less_equal(x, y, name=None)` {#less_equal} - -Returns the truth value of (x <= y) element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.make_template.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.make_template.md deleted file mode 100644 index bb0cff57cd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.make_template.md +++ /dev/null @@ -1,105 +0,0 @@ -### `tf.make_template(name_, func_, create_scope_now_=False, **kwargs)` {#make_template} - -Given an arbitrary function, wrap it so that it does variable sharing. - -This wraps `func_` in a Template and partially evaluates it. Templates are -functions that create variables the first time they are called and reuse them -thereafter. 
In order for `func_` to be compatible with a `Template` it must -have the following properties: - -* The function should create all trainable variables and any variables that - should be reused by calling `tf.get_variable`. If a trainable variable is - created using `tf.Variable`, then a ValueError will be thrown. Variables - that are intended to be locals can be created by specifying - `tf.Variable(..., trainable=false)`. -* The function may use variable scopes and other templates internally to - create and reuse variables, but it shouldn't use `tf.get_variables` to - capture variables that are defined outside of the scope of the function. -* Internal scopes and variable names should not depend on any arguments that - are not supplied to `make_template`. In general you will get a ValueError - telling you that you are trying to reuse a variable that doesn't exist - if you make a mistake. - -In the following example, both `z` and `w` will be scaled by the same `y`. It -is important to note that if we didn't assign `scalar_name` and used a -different name for z and w that a `ValueError` would be thrown because it -couldn't reuse the variable. - -```python -def my_op(x, scalar_name): - var1 = tf.get_variable(scalar_name, - shape=[], - initializer=tf.constant_initializer(1)) - return x * var1 - -scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y') - -z = scale_by_y(input1) -w = scale_by_y(input2) -``` - -As a safe-guard, the returned function will raise a `ValueError` after the -first call if trainable variables are created by calling `tf.Variable`. - -If all of these are true, then 2 properties are enforced by the template: - -1. Calling the same template multiple times will share all non-local - variables. -2. Two different templates are guaranteed to be unique, unless you reenter the - same variable scope as the initial definition of a template and redefine - it. 
An examples of this exception: - -```python -def my_op(x, scalar_name): - var1 = tf.get_variable(scalar_name, - shape=[], - initializer=tf.constant_initializer(1)) - return x * var1 - -with tf.variable_scope('scope') as vs: - scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y') - z = scale_by_y(input1) - w = scale_by_y(input2) - -# Creates a template that reuses the variables above. -with tf.variable_scope(vs, reuse=True): - scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y') - z2 = scale_by_y2(input1) - w2 = scale_by_y2(input2) -``` - -Depending on the value of `create_scope_now_`, the full variable scope may be -captured either at the time of first call or at the time of construction. If -this option is set to True, then all Tensors created by repeated calls to the -template will have an extra trailing _N+1 to their name, as the first time the -scope is entered in the Template constructor no Tensors are created. - -Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to -reduce the likelihood of collisions with kwargs. - -##### Args: - - -* `name_`: A name for the scope created by this template. If necessary, the name - will be made unique by appending `_N` to the name. -* `func_`: The function to wrap. -* `create_scope_now_`: Boolean controlling whether the scope should be created - when the template is constructed or when the template is called. Default - is False, meaning the scope is created when the template is called. -* `**kwargs`: Keyword arguments to apply to `func_`. - -##### Returns: - - A function to encapsulate a set of variables which should be created once - and reused. An enclosing scope will created, either where `make_template` - is called, or wherever the result is called, depending on the value of - `create_scope_now_`. 
Regardless of the value, the first time the template - is called it will enter the scope with no reuse, and call `func_` to create - variables, which are guaranteed to be unique. All subsequent calls will - re-enter the scope and reuse those variables. - -##### Raises: - - -* `ValueError`: if the name is None. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md deleted file mode 100644 index b5bf7a30a5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.multinomial(logits, num_samples, seed=None, name=None)` {#multinomial} - -Draws samples from a multinomial distribution. - -Example: - - samples = tf.multinomial(tf.log([[0.5, 0.5]]), 10) - # samples has shape [1, 10], where each value is either 0 or 1. - - samples = tf.multinomial([[1, -1, -1]], 10) - # samples is equivalent to tf.zeros([1, 10], dtype=tf.int64). - -##### Args: - - -* `logits`: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice - `[i, :]` represents the unnormalized log probabilities for all classes. -* `num_samples`: 0-D. Number of independent samples to draw for each row slice. -* `seed`: A Python integer. Used to create a random seed for the distribution. - See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: Optional name for the operation. - -##### Returns: - - The drawn samples of shape `[batch_size, num_samples]`. 
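The sampling semantics described for `tf.multinomial` (each row of `logits` holds unnormalized log-probabilities; each output row holds `num_samples` drawn class indices) can be sketched in NumPy. `multinomial_np` is a hypothetical stand-in, not the TF op:

```python
import numpy as np

def multinomial_np(logits, num_samples, rng=None):
    """Sketch of tf.multinomial: draw `num_samples` class indices per row
    of unnormalized log-probabilities `logits`."""
    rng = np.random.default_rng(rng)
    logits = np.asarray(logits, dtype=float)
    # Softmax over each row turns the unnormalized log-probabilities
    # into a proper distribution (max-subtraction for stability).
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return np.stack([rng.choice(p.shape[1], size=num_samples, p=row)
                     for row in p])

samples = multinomial_np(np.log([[0.5, 0.5]]), 10, rng=0)
assert samples.shape == (1, 10)          # [batch_size, num_samples]
assert set(np.unique(samples)) <= {0, 1}

# A row like [1, -1000, -1000] puts essentially all mass on class 0,
# so every draw is 0.
assert np.all(multinomial_np([[1.0, -1000.0, -1000.0]], 10, rng=0) == 0)
```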
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.avg_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.avg_pool3d.md new file mode 100644 index 0000000000..76503e0567 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.avg_pool3d.md @@ -0,0 +1,24 @@ +### `tf.nn.avg_pool3d(input, ksize, strides, padding, name=None)` {#avg_pool3d} + +Performs 3D average pooling on the input. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Shape `[batch, depth, rows, cols, channels]` tensor to pool over. +* `ksize`: A list of `ints` that has length `>= 5`. + 1-D tensor of length 5. The size of the window for each dimension of + the input tensor. Must have `ksize[0] = ksize[1] = 1`. +* `strides`: A list of `ints` that has length `>= 5`. + 1-D tensor of length 5. The stride of the sliding window for each + dimension of `input`. Must have `strides[0] = strides[4] = 1`. +* `padding`: A `string` from: `"SAME", "VALID"`. + The type of padding algorithm to use. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + The average pooled output tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.bias_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.bias_add.md deleted file mode 100644 index 1eea161f23..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.bias_add.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.nn.bias_add(value, bias, data_format=None, name=None)` {#bias_add} - -Adds `bias` to `value`. - -This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D. -Broadcasting is supported, so `value` may have any number of dimensions. 
-Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the -case where both types are quantized. - -##### Args: - - -* `value`: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, - `int16`, `int8`, or `complex64`. -* `bias`: A 1-D `Tensor` with size matching the last dimension of `value`. - Must be the same type as `value` unless `value` is a quantized type, - in which case a different quantized type may be used. -* `data_format`: A string. 'NHWC' and 'NCHW' are supported. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with the same type as `value`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.depthwise_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.depthwise_conv2d.md new file mode 100644 index 0000000000..7bacc20da8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.depthwise_conv2d.md @@ -0,0 +1,37 @@ +### `tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None)` {#depthwise_conv2d} + +Depthwise 2-D convolution. + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter tensor of shape +`[filter_height, filter_width, in_channels, channel_multiplier]` +containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` +applies a different filter to each input channel (expanding from 1 channel +to `channel_multiplier` channels for each), then concatenates the results +together. The output has `in_channels * channel_multiplier` channels. + +In detail, + + output[b, i, j, k * channel_multiplier + q] = + sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * + filter[di, dj, k, q] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the +same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + +##### Args: + + +* `input`: 4-D with shape `[batch, in_height, in_width, in_channels]`. 
+* `filter`: 4-D with shape + `[filter_height, filter_width, in_channels, channel_multiplier]`. +* `strides`: 1-D of size 4. The stride of the sliding window for each + dimension of `input`. +* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. +* `name`: A name for this operation (optional). + +##### Returns: + + A 4-D `Tensor` of shape + `[batch, out_height, out_width, in_channels * channel_multiplier].` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md new file mode 100644 index 0000000000..03997f7813 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.embedding_lookup_sparse.md @@ -0,0 +1,66 @@ +### `tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner='mean')` {#embedding_lookup_sparse} + +Computes embeddings for the given ids and weights. + +This op assumes that there is at least one id for each row in the dense tensor +represented by sp_ids (i.e. there are no rows with empty features), and that +all the indices of sp_ids are in canonical row-major order. + +It also assumes that all id values lie in the range [0, p0), where p0 +is the sum of the size of params along dimension 0. + +##### Args: + + +* `params`: A single tensor representing the complete embedding tensor, + or a list of P tensors all of same shape except for the first dimension, + representing sharded embedding tensors. +* `sp_ids`: N x M SparseTensor of int64 ids (typically from FeatureValueToId), + where N is typically batch size and M is arbitrary. +* `sp_weights`: either a SparseTensor of float / double weights, or None to + indicate all weights should be taken to be 1. If specified, sp_weights + must have exactly the same shape and indices as sp_ids. 
+* `partition_strategy`: A string specifying the partitioning strategy, relevant
+    if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
+    is `"mod"`. See `tf.nn.embedding_lookup` for more details.
+* `name`: Optional name for the op.
+* `combiner`: A string specifying the reduction op. Currently "mean", "sqrtn"
+    and "sum" are supported.
+    "sum" computes the weighted sum of the embedding results for each row.
+    "mean" is the weighted sum divided by the total weight.
+    "sqrtn" is the weighted sum divided by the square root of the sum of the
+    squares of the weights.
+
+##### Returns:
+
+  A dense tensor representing the combined embeddings for the
+  sparse ids. For each row in the dense tensor represented by sp_ids, the op
+  looks up the embeddings for all ids in that row, multiplies them by the
+  corresponding weight, and combines these embeddings as specified.
+
+  In other words, if
+    shape(combined params) = [p0, p1, ..., pm]
+  and
+    shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
+  then
+    shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
+
+  For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
+
+    [0, 0]: id 1, weight 2.0
+    [0, 1]: id 3, weight 0.5
+    [1, 0]: id 0, weight 1.0
+    [2, 3]: id 1, weight 3.0
+
+  with combiner="mean", then the output will be a 3x20 matrix where
+    output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
+    output[1, :] = params[0, :] * 1.0
+    output[2, :] = (params[1, :] * 3.0) / 3.0
+
+##### Raises:
+
+
+* `TypeError`: If sp_ids is not a SparseTensor, or if sp_weights is neither
+    None nor SparseTensor.
+* `ValueError`: If combiner is not one of {"mean", "sqrtn", "sum"}.
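The combiner arithmetic documented above can be sketched in plain NumPy. This is a hypothetical reference implementation of the math only (the names `combine_embeddings` and `rows` are illustrative, not part of the TensorFlow API), with each output row's `(id, weight)` pairs standing in for the rows of `sp_ids` / `sp_weights`:

```python
import numpy as np

def combine_embeddings(params, rows, combiner="mean"):
    # rows: dict mapping output row -> list of (id, weight) pairs,
    # mimicking one row of sp_ids / sp_weights.
    out = np.zeros((max(rows) + 1, params.shape[1]))
    for r, pairs in rows.items():
        ids = np.array([i for i, _ in pairs])
        w = np.array([wt for _, wt in pairs], dtype=np.float64)
        s = (params[ids] * w[:, None]).sum(axis=0)   # weighted sum
        if combiner == "sum":
            out[r] = s
        elif combiner == "mean":
            out[r] = s / w.sum()                     # divide by total weight
        elif combiner == "sqrtn":
            out[r] = s / np.sqrt((w ** 2).sum())     # divide by sqrt(sum of squared weights)
        else:
            raise ValueError('combiner must be one of "mean", "sqrtn", "sum"')
    return out

# The worked example above: a 10x20 params matrix, three sparse rows.
params = np.arange(200, dtype=np.float64).reshape(10, 20)
rows = {0: [(1, 2.0), (3, 0.5)], 1: [(0, 1.0)], 2: [(1, 3.0)]}
out = combine_embeddings(params, rows, combiner="mean")
# out[0] equals (params[1] * 2.0 + params[3] * 0.5) / (2.0 + 0.5);
# out[2] equals params[1], since the single weight 3.0 cancels under "mean".
```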
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fixed_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fixed_unigram_candidate_sampler.md new file mode 100644 index 0000000000..ad9b059e42 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fixed_unigram_candidate_sampler.md @@ -0,0 +1,75 @@ +### `tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None)` {#fixed_unigram_candidate_sampler} + +Samples a set of classes using the provided (fixed) base distribution. + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution is read from a file or passed in as an +in-memory array. There is also an option to skew the distribution by +applying a distortion power to the weights. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + +##### Args: + + +* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. +* `num_true`: An `int`. The number of target classes per training example. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. 
+* `unique`: A `bool`. Determines whether all sampled classes in a batch are + unique. +* `range_max`: An `int`. The number of possible classes. +* `vocab_file`: Each valid line in this file (which should have a CSV-like + format) corresponds to a valid word ID. IDs are in sequential order, + starting from num_reserved_ids. The last entry in each line is expected + to be a value corresponding to the count or relative probability. Exactly + one of `vocab_file` and `unigrams` needs to be passed to this operation. +* `distortion`: The distortion is used to skew the unigram probability + distribution. Each weight is first raised to the distortion's power + before adding to the internal unigram distribution. As a result, + `distortion = 1.0` gives regular unigram sampling (as defined by the vocab + file), and `distortion = 0.0` gives a uniform distribution. +* `num_reserved_ids`: Optionally some reserved IDs can be added in the range + `[0, num_reserved_ids]` by the users. One use case is that a special + unknown word token is used as ID 0. These IDs will have a sampling + probability of 0. +* `num_shards`: A sampler can be used to sample from a subset of the original + range in order to speed up the whole computation through parallelism. This + parameter (together with `shard`) indicates the number of partitions that + are being used in the overall computation. +* `shard`: A sampler can be used to sample from a subset of the original range + in order to speed up the whole computation through parallelism. This + parameter (together with `num_shards`) indicates the particular partition + number of the operation, when partitioning is being used. +* `unigrams`: A list of unigram counts or probabilities, one per ID in + sequential order. Exactly one of `vocab_file` and `unigrams` should be + passed to this operation. +* `seed`: An `int`. An operation-specific seed. Default is 0. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + +* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. + The sampled classes. +* `true_expected_count`: A tensor of type `float`. Same shape as + `true_classes`. The expected counts under the sampling distribution + of each of `true_classes`. +* `sampled_expected_count`: A tensor of type `float`. Same shape as + `sampled_candidates`. The expected counts under the sampling distribution + of each of `sampled_candidates`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md new file mode 100644 index 0000000000..fdcdd71e20 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.l2_normalize.md @@ -0,0 +1,24 @@ +### `tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)` {#l2_normalize} + +Normalizes along dimension `dim` using an L2 norm. + +For a 1-D tensor with `dim = 0`, computes + + output = x / sqrt(max(sum(x**2), epsilon)) + +For `x` with more dimensions, independently normalizes each 1-D slice along +dimension `dim`. + +##### Args: + + +* `x`: A `Tensor`. +* `dim`: Dimension along which to normalize. +* `epsilon`: A lower bound value for the norm. Will use `sqrt(epsilon)` as the + divisor if `norm < sqrt(epsilon)`. +* `name`: A name for this operation (optional). + +##### Returns: + + A `Tensor` with the same shape as `x`. 
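The formula above is easy to check with a NumPy sketch of the documented math (the helper name `l2_normalize_ref` is illustrative, not the TF kernel):

```python
import numpy as np

def l2_normalize_ref(x, dim, epsilon=1e-12):
    # Each 1-D slice along `dim` is divided by sqrt(max(sum(x**2), epsilon)),
    # so near-zero slices are divided by sqrt(epsilon) instead of 0.
    square_sum = np.sum(np.square(x), axis=dim, keepdims=True)
    return x / np.sqrt(np.maximum(square_sum, epsilon))

y = l2_normalize_ref(np.array([3.0, 4.0]), dim=0)
# y == [0.6, 0.8]; the result has unit L2 norm.
```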
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.local_response_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.local_response_normalization.md deleted file mode 100644 index 349e34fa73..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.local_response_normalization.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)` {#local_response_normalization} - -Local Response Normalization. - -The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last -dimension), and each vector is normalized independently. Within a given vector, -each component is divided by the weighted, squared sum of inputs within -`depth_radius`. In detail, - - sqr_sum[a, b, c, d] = - sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) - output = input / (bias + alpha * sqr_sum) ** beta - -For details, see [Krizhevsky et al., ImageNet classification with deep -convolutional neural networks (NIPS 2012)] -(http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks). - -##### Args: - - -* `input`: A `Tensor` of type `float32`. 4-D. -* `depth_radius`: An optional `int`. Defaults to `5`. - 0-D. Half-width of the 1-D normalization window. -* `bias`: An optional `float`. Defaults to `1`. - An offset (usually positive to avoid dividing by 0). -* `alpha`: An optional `float`. Defaults to `1`. - A scale factor, usually positive. -* `beta`: An optional `float`. Defaults to `0.5`. An exponent. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_uniform_candidate_sampler.md deleted file mode 100644 index baf9f9d421..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_uniform_candidate_sampler.md +++ /dev/null @@ -1,56 +0,0 @@ -### `tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#log_uniform_candidate_sampler} - -Samples a set of classes using a log-uniform (Zipfian) base distribution. - -This operation randomly samples a tensor of sampled classes -(`sampled_candidates`) from the range of integers `[0, range_max)`. - -The elements of `sampled_candidates` are drawn without replacement -(if `unique=True`) or with replacement (if `unique=False`) from -the base distribution. - -The base distribution for this operation is an approximately log-uniform -or Zipfian distribution: - -`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)` - -This sampler is useful when the target classes approximately follow such -a distribution - for example, if the classes represent words in a lexicon -sorted in decreasing order of frequency. If your classes are not ordered by -decreasing frequency, do not use this op. - -In addition, this operation returns tensors `true_expected_count` -and `sampled_expected_count` representing the number of times each -of the target classes (`true_classes`) and the sampled -classes (`sampled_candidates`) is expected to occur in an average -tensor of sampled classes. These values correspond to `Q(y|x)` -defined in [this -document](http://www.tensorflow.org/extras/candidate_sampling.pdf). -If `unique=True`, then these are post-rejection probabilities and we -compute them approximately. - -##### Args: - - -* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. 
The target classes.
-* `num_true`: An `int`. The number of target classes per training example.
-* `num_sampled`: An `int`. The number of classes to randomly sample per batch.
-* `unique`: A `bool`. Determines whether all sampled classes in a batch are
-    unique.
-* `range_max`: An `int`. The number of possible classes.
-* `seed`: An `int`. An operation-specific seed. Default is 0.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-
-* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`.
-    The sampled classes.
-* `true_expected_count`: A tensor of type `float`. Same shape as
-    `true_classes`. The expected counts under the sampling distribution
-    of each of `true_classes`.
-* `sampled_expected_count`: A tensor of type `float`. Same shape as
-    `sampled_candidates`. The expected counts under the sampling distribution
-    of each of `sampled_candidates`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md
new file mode 100644
index 0000000000..d7a6b9cab4
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.normalize_moments.md
@@ -0,0 +1,20 @@
+### `tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None)` {#normalize_moments}
+
+Calculate the mean and variance based on the sufficient statistics.
+
+##### Args:
+
+
+* `counts`: A `Tensor` containing the total count of the data (one value).
+* `mean_ss`: A `Tensor` containing the mean sufficient statistics: the (possibly
+    shifted) sum of the elements to average over.
+* `variance_ss`: A `Tensor` containing the variance sufficient statistics: the
+    (possibly shifted) squared sum of the data to compute the variance over.
+* `shift`: A `Tensor` containing the value by which the data is shifted for
+    numerical stability, or `None` if no shift was performed.
+* `name`: Name used to scope the operations that compute the moments. + +##### Returns: + + Two `Tensor` objects: `mean` and `variance`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sampled_softmax_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sampled_softmax_loss.md deleted file mode 100644 index 6d22f67352..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.sampled_softmax_loss.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, partition_strategy='mod', name='sampled_softmax_loss')` {#sampled_softmax_loss} - -Computes and returns the sampled softmax training loss. - -This is a faster way to train a softmax classifier over a huge number of -classes. - -This operation is for training only. It is generally an underestimate of -the full softmax loss. - -At inference time, you can compute full softmax probabilities with the -expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`. - -See our [Candidate Sampling Algorithms Reference] -(../../extras/candidate_sampling.pdf) - -Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) -([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math. - -##### Args: - - -* `weights`: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` - objects whose concatenation along dimension 0 has shape - [num_classes, dim]. The (possibly-sharded) class embeddings. -* `biases`: A `Tensor` of shape `[num_classes]`. The class biases. -* `inputs`: A `Tensor` of shape `[batch_size, dim]`. The forward - activations of the input network. -* `labels`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. The target classes. Note that this format differs from - the `labels` argument of `nn.softmax_cross_entropy_with_logits`. -* `num_sampled`: An `int`. 
The number of classes to randomly sample per batch. -* `num_classes`: An `int`. The number of possible classes. -* `num_true`: An `int`. The number of target classes per training example. -* `sampled_values`: a tuple of (`sampled_candidates`, `true_expected_count`, - `sampled_expected_count`) returned by a `*_candidate_sampler` function. - (if None, we default to `log_uniform_candidate_sampler`) -* `remove_accidental_hits`: A `bool`. whether to remove "accidental hits" - where a sampled class equals one of the target classes. Default is - True. -* `partition_strategy`: A string specifying the partitioning strategy, relevant - if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. - Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. -* `name`: A name for the operation (optional). - -##### Returns: - - A `batch_size` 1-D tensor of per-example sampled softmax losses. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md new file mode 100644 index 0000000000..f4be03303f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.separable_conv2d.md @@ -0,0 +1,40 @@ +### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None)` {#separable_conv2d} + +2-D convolution with separable filters. + +Performs a depthwise convolution that acts separately on channels followed by +a pointwise convolution that mixes channels. Note that this is separability +between dimensions `[1, 2]` and `3`, not spatial separability between +dimensions `1` and `2`. 
+
+In detail,
+
+    output[b, i, j, k] = sum_{di, dj, q, r}
+        input[b, strides[1] * i + di, strides[2] * j + dj, q] *
+        depthwise_filter[di, dj, q, r] *
+        pointwise_filter[0, 0, q * channel_multiplier + r, k]
+
+`strides` controls the strides for the depthwise convolution only, since
+the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
+`strides[0] = strides[3] = 1`. For the most common case of the same
+horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
+
+##### Args:
+
+
+* `input`: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
+* `depthwise_filter`: 4-D `Tensor` with shape
+    `[filter_height, filter_width, in_channels, channel_multiplier]`.
+    Contains `in_channels` convolutional filters of depth 1.
+* `pointwise_filter`: 4-D `Tensor` with shape
+    `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
+    filter to mix channels after `depthwise_filter` has convolved spatially.
+* `strides`: 1-D of size 4. The strides for the depthwise convolution for
+    each dimension of `input`.
+* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
+* `name`: A name for this operation (optional).
+
+##### Returns:
+
+  A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ones_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ones_like.md
deleted file mode 100644
index 2c9b04ceca..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ones_like.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.ones_like(tensor, dtype=None, name=None)` {#ones_like}
-
-Creates a tensor with all elements set to 1.
-
-Given a single tensor (`tensor`), this operation returns a tensor of the same
-type and shape as `tensor` with all elements set to 1. Optionally, you can
-specify a new type (`dtype`) for the returned tensor.
- -For example: - -```python -# 'tensor' is [[1, 2, 3], [4, 5, 6]] -tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]] -``` - -##### Args: - - -* `tensor`: A `Tensor`. -* `dtype`: A type for the returned `Tensor`. Must be `float32`, `float64`, - `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`. - -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with all elements set to 1. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pad.md deleted file mode 100644 index 7fbf7442c7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.pad.md +++ /dev/null @@ -1,57 +0,0 @@ -### `tf.pad(tensor, paddings, mode='CONSTANT', name=None)` {#pad} - -Pads a tensor. - -This operation pads a `tensor` according to the `paddings` you specify. -`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of -`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how -many values to add before the contents of `tensor` in that dimension, and -`paddings[D, 1]` indicates how many values to add after the contents of -`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]` -and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If -`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be -no greater than `tensor.dim_size(D)`. - -The padded size of each dimension D of the output is: - -`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]` - -For example: - -```python -# 't' is [[1, 2, 3], [4, 5, 6]]. -# 'paddings' is [[1, 1,], [2, 2]]. -# rank of 't' is 2. 
-pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0], - [0, 0, 1, 2, 3, 0, 0], - [0, 0, 4, 5, 6, 0, 0], - [0, 0, 0, 0, 0, 0, 0]] - -pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4], - [3, 2, 1, 2, 3, 2, 1], - [6, 5, 4, 5, 6, 5, 4], - [3, 2, 1, 2, 3, 2, 1]] - -pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2], - [2, 1, 1, 2, 3, 3, 2], - [5, 4, 4, 5, 6, 6, 5], - [5, 4, 4, 5, 6, 6, 5]] -``` - -##### Args: - - -* `tensor`: A `Tensor`. -* `paddings`: A `Tensor` of type `int32`. -* `mode`: One of "CONSTANT", "REFLECT", or "SYMMETRIC". -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `tensor`. - -##### Raises: - - -* `ValueError`: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC". - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_single_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_single_example.md new file mode 100644 index 0000000000..e0bce09137 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_single_example.md @@ -0,0 +1,35 @@ +### `tf.parse_single_example(serialized, features, name=None, example_names=None)` {#parse_single_example} + +Parses a single `Example` proto. + +Similar to `parse_example`, except: + +For dense tensors, the returned `Tensor` is identical to the output of +`parse_example`, except there is no batch dimension, the output shape is the +same as the shape given in `dense_shape`. + +For `SparseTensor`s, the first (batch) column of the indices matrix is removed +(the indices matrix is a column vector), the values vector is unchanged, and +the first (`batch_size`) entry of the shape vector is removed (it is now a +single element vector). + +##### Args: + + +* `serialized`: A scalar string Tensor, a single serialized Example. + See `_parse_single_example_raw` documentation for more details. 
+* `features`: A `dict` mapping feature keys to `FixedLenFeature` or + `VarLenFeature` values. +* `name`: A name for this operation (optional). +* `example_names`: (Optional) A scalar string Tensor, the associated name. + See `_parse_single_example_raw` documentation for more details. + +##### Returns: + + A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. + +##### Raises: + + +* `ValueError`: if any feature is invalid. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder_with_default.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder_with_default.md deleted file mode 100644 index 2719b876f1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder_with_default.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.placeholder_with_default(input, shape, name=None)` {#placeholder_with_default} - -A placeholder op that passes though `input` when its output is not fed. - -##### Args: - - -* `input`: A `Tensor`. The default value to produce when `output` is not fed. -* `shape`: A `tf.TensorShape` or list of `ints`. - The (possibly partial) shape of the tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - A placeholder tensor that defaults to `input` if it is not fed. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.read_file.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.read_file.md deleted file mode 100644 index 3c0ad3652a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.read_file.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.read_file(filename, name=None)` {#read_file} - -Reads and outputs the entire contents of the input filename. - -##### Args: - - -* `filename`: A `Tensor` of type `string`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md new file mode 100644 index 0000000000..af446b6c53 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_mean.md @@ -0,0 +1,35 @@ +### `tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_mean} + +Computes the mean of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +For example: + +```python +# 'x' is [[1., 1.] +# [2., 2.]] +tf.reduce_mean(x) ==> 1.5 +tf.reduce_mean(x, 0) ==> [1.5, 1.5] +tf.reduce_mean(x, 1) ==> [1., 2.] +``` + +##### Args: + + +* `input_tensor`: The tensor to reduce. Should have numeric type. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md new file mode 100644 index 0000000000..1e8c3479e4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md @@ -0,0 +1,21 @@ +### `tf.scalar_summary(tags, values, collections=None, name=None)` {#scalar_summary} + +Outputs a `Summary` protocol buffer with scalar values. + +The input `tags` and `values` must have the same shape. 
The generated +summary has a summary value for each tag-value pair in `tags` and `values`. + +##### Args: + + +* `tags`: A `string` `Tensor`. Tags for the summaries. +* `values`: A real numeric Tensor. Values for the summaries. +* `collections`: Optional list of graph collections keys. The new summary op is + added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A scalar `Tensor` of type `string`. The serialized `Summary` protocol + buffer. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_add.md new file mode 100644 index 0000000000..a8f8b7a9b0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_add.md @@ -0,0 +1,46 @@ +### `tf.scatter_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_add} + +Adds sparse updates to a variable reference. + +This operation computes + + # Scalar indices + ref[indices, ...] += updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] += updates[i, ...] + + # High rank indices (for each i, ..., j) + ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] + +This operation outputs `ref` after the update is done. +This makes it easier to chain operations that need to use the reset value. + +Duplicate entries are handled correctly: if multiple `indices` reference +the same location, their contributions add. + +Requires `updates.shape = indices.shape + ref.shape[1:]`. + +
+ +
+ +##### Args: + + +* `ref`: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Should be from a `Variable` node. +* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A tensor of indices into the first dimension of `ref`. +* `updates`: A `Tensor`. Must have the same type as `ref`. + A tensor of updated values to add to `ref`. +* `use_locking`: An optional `bool`. Defaults to `False`. + If True, the addition will be protected by a lock; + otherwise the behavior is undefined, but may exhibit less contention. +* `name`: A name for the operation (optional). + +##### Returns: + + Same as `ref`. Returned as a convenience for operations that want + to use the updated values after the update is done. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_sub.md deleted file mode 100644 index 8f1afc42f6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_sub.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_sub} - -Subtracts sparse updates to a variable reference. - - # Scalar indices - ref[indices, ...] -= updates[...] - - # Vector indices (for each i) - ref[indices[i], ...] -= updates[i, ...] - - # High rank indices (for each i, ..., j) - ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] - -This operation outputs `ref` after the update is done. -This makes it easier to chain operations that need to use the reset value. - -Duplicate entries are handled correctly: if multiple `indices` reference -the same location, their (negated) contributions add. - -Requires `updates.shape = indices.shape + ref.shape[1:]`. - -
- -
- -##### Args: - - -* `ref`: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Should be from a `Variable` node. -* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A tensor of indices into the first dimension of `ref`. -* `updates`: A `Tensor`. Must have the same type as `ref`. - A tensor of updated values to subtract from `ref`. -* `use_locking`: An optional `bool`. Defaults to `False`. - If True, the subtraction will be protected by a lock; - otherwise the behavior is undefined, but may exhibit less contention. -* `name`: A name for the operation (optional). - -##### Returns: - - Same as `ref`. Returned as a convenience for operations that want - to use the updated values after the update is done. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md new file mode 100644 index 0000000000..c9d7a28900 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_max.md @@ -0,0 +1,30 @@ +### `tf.segment_max(data, segment_ids, name=None)` {#segment_max} + +Computes the maximum along segments of a tensor. + +Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \max_j(data_j)\\) where `max` is over `j` such +that `segment_ids[j] == i`. + +
+ +
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md new file mode 100644 index 0000000000..5d901859a9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md @@ -0,0 +1,32 @@ +### `tf.segment_mean(data, segment_ids, name=None)` {#segment_mean} + +Computes the mean along segments of a tensor. + +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +Computes a tensor such that +\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is +over `j` such that `segment_ids[j] == i` and `N` is the total number of +values summed. + +
+ +
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_sum.md deleted file mode 100644 index eeffe1601a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_sum.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.segment_sum(data, segment_ids, name=None)` {#segment_sum} - -Computes the sum along segments of a tensor. - -Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation) -for an explanation of segments. - -Computes a tensor such that -\\(output_i = \sum_j data_j\\) where sum is over `j` such -that `segment_ids[j] == i`. - -
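Because `segment_ids` is sorted, segment sums can be sketched with NumPy's `ufunc.reduceat` over the run boundaries (an equivalent illustration, not the TensorFlow kernel):

```python
import numpy as np

data = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
segment_ids = np.array([0, 0, 1, 1])  # sorted

# Row indices where a new segment starts, then sum each run of rows.
starts = np.flatnonzero(np.r_[True, np.diff(segment_ids) != 0])
sums = np.add.reduceat(data, starts, axis=0)
print(sums)
# [[ 4.  6.]
#  [12. 14.]]
```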
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_add.md deleted file mode 100644 index 4835ae70e5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_add.md +++ /dev/null @@ -1,55 +0,0 @@ -### `tf.sparse_add(a, b, thresh=0)` {#sparse_add} - -Adds two tensors, at least one of which is a `SparseTensor`. - -If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If -both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order -of arguments does not matter. Use vanilla `tf.add()` for adding two dense -`Tensor`s. - -The indices of any input `SparseTensor` are assumed ordered in standard -lexicographic order. If this is not the case, before this step run -`SparseReorder` to restore index ordering. - -If both arguments are sparse, we perform "clipping" as follows. By default, -if two values sum to zero at some index, the output `SparseTensor` would still -include that particular location in its index, storing a zero in the -corresponding value slot. To override this, callers can specify `thresh`, -indicating that if the sum has a magnitude strictly smaller than `thresh`, its -corresponding value and index would then not be included.
In particular, -`thresh == 0.0` (the default) means everything is kept and thresholding -takes effect only for a positive `thresh`. - -For example, suppose the logical sum of two sparse operands is (densified): - - [ 2] - [.1 0] - [ 6 -.2] - -Then, - - - thresh == 0 (the default): all 5 index/value pairs will be returned. - - thresh == 0.11: only .1 and 0 will vanish, and the remaining three - index/value pairs will be returned. - - thresh == 0.21: .1, 0, and -.2 will vanish. - -##### Args: - - -* `a`: The first operand; `SparseTensor` or `Tensor`. -* `b`: The second operand; `SparseTensor` or `Tensor`. At least one operand - must be sparse. -* `thresh`: A 0-D `Tensor`. The magnitude threshold that determines if an - output value/index pair takes space. Its dtype should match that of the - values if they are real; if the latter are complex64/complex128, then the - dtype should be float32/float64, correspondingly. - -##### Returns: - - A `SparseTensor` or a `Tensor`, representing the sum. - -##### Raises: - - -* `TypeError`: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md deleted file mode 100644 index 8d05472e34..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md +++ /dev/null @@ -1,100 +0,0 @@ -### `tf.sparse_concat(concat_dim, sp_inputs, name=None, expand_nonconcat_dim=False)` {#sparse_concat} - -Concatenates a list of `SparseTensor` along the specified dimension. - -Concatenation is with respect to the dense versions of each sparse input. -It is assumed that each input is a `SparseTensor` whose elements are ordered -along increasing dimension number. - -If expand_nonconcat_dim is False, all inputs' shapes must match, except for -the concat dimension.
If expand_nonconcat_dim is True, then inputs' shapes are -allowed to vary among all inputs. - -The `indices`, `values`, and `shapes` lists must have the same length. - -If expand_nonconcat_dim is False, then the output shape is identical to the -inputs', except along the concat dimension, where it is the sum of the inputs' -sizes along that dimension. - -If expand_nonconcat_dim is True, then the output shape along the non-concat -dimensions will be expanded to the largest among all inputs, and it is the -sum of the inputs' sizes along the concat dimension. - -The output elements will be resorted to preserve the sort order along -increasing dimension number. - -This op runs in `O(M log M)` time, where `M` is the total number of non-empty -values across all inputs. This is due to the need for an internal sort in -order to concatenate efficiently across an arbitrary dimension. - -For example, if `concat_dim = 1` and the inputs are - - sp_inputs[0]: shape = [2, 3] - [0, 2]: "a" - [1, 0]: "b" - [1, 1]: "c" - - sp_inputs[1]: shape = [2, 4] - [0, 1]: "d" - [0, 2]: "e" - -then the output will be - - shape = [2, 7] - [0, 2]: "a" - [0, 4]: "d" - [0, 5]: "e" - [1, 0]: "b" - [1, 1]: "c" - -Graphically this is equivalent to doing - - [ a] concat [ d e ] = [ a d e ] - [b c ] [ ] [b c ] - -Another example, if `concat_dim = 1` and the inputs are - - sp_inputs[0]: shape = [3, 3] - [0, 2]: "a" - [1, 0]: "b" - [2, 1]: "c" - - sp_inputs[1]: shape = [2, 4] - [0, 1]: "d" - [0, 2]: "e" - -if expand_nonconcat_dim = False, this will result in an error. But if -expand_nonconcat_dim = True, this will result in: - - shape = [3, 7] - [0, 2]: "a" - [0, 4]: "d" - [0, 5]: "e" - [1, 0]: "b" - [2, 1]: "c" - -Graphically this is equivalent to doing - - [ a] concat [ d e ] = [ a d e ] - [b ] [ ] [b ] - [ c ] [ c ] - - -##### Args: - - -* `concat_dim`: Dimension to concatenate along. -* `sp_inputs`: List of `SparseTensor` to concatenate. -* `name`: A name prefix for the returned tensors (optional).
-* `expand_nonconcat_dim`: Whether to allow the expansion in the non-concat - dimensions. Defaults to `False`. - -##### Returns: - - A `SparseTensor` with the concatenated output. - -##### Raises: - - -* `TypeError`: If `sp_inputs` is not a list of `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_fill_empty_rows.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_fill_empty_rows.md deleted file mode 100644 index 3ea1697f3d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_fill_empty_rows.md +++ /dev/null @@ -1,54 +0,0 @@ -### `tf.sparse_fill_empty_rows(sp_input, default_value, name=None)` {#sparse_fill_empty_rows} - -Fills empty rows in the input 2-D `SparseTensor` with a default value. - -This op adds entries with the specified `default_value` at index -`[row, 0]` for any row in the input that does not already have a value. - -For example, suppose `sp_input` has shape `[5, 6]` and non-empty values: - - [0, 1]: a - [0, 3]: b - [2, 0]: c - [3, 1]: d - -Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values: - - [0, 1]: a - [0, 3]: b - [1, 0]: default_value - [2, 0]: c - [3, 1]: d - [4, 0]: default_value - -Note that the input may have empty columns at the end, with no effect on -this op. - -The output `SparseTensor` will be in row-major order and will have the -same shape as the input. - -This op also returns an indicator vector such that - - empty_row_indicator[i] = True iff row i was an empty row. - -##### Args: - - -* `sp_input`: A `SparseTensor` with shape `[N, M]`. -* `default_value`: The value to fill for empty rows, with the same type as - `sp_input`. -* `name`: A name prefix for the returned tensors (optional). - -##### Returns: - - -* `sp_ordered_output`: A `SparseTensor` with shape `[N, M]`, and with all empty - rows filled in with `default_value`.
-* `empty_row_indicator`: A bool vector of length `N` indicating whether each - input row was empty. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md new file mode 100644 index 0000000000..38742123d6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_merge.md @@ -0,0 +1,73 @@ +### `tf.sparse_merge(sp_ids, sp_values, vocab_size, name=None)` {#sparse_merge} + +Combines a batch of feature ids and values into a single `SparseTensor`. + +The most common use case for this function occurs when feature ids and +their corresponding values are stored in `Example` protos on disk. +`parse_example` will return a batch of ids and a batch of values, and this +function joins them into a single logical `SparseTensor` for use in +functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc. + +The `SparseTensor` returned by this function has the following properties: + + - `indices` is equivalent to `sp_ids.indices` with the last + dimension discarded and replaced with `sp_ids.values`. + - `values` is simply `sp_values.values`. + - If `sp_ids.shape = [D0, D1, ..., Dn, K]`, then + `output.shape = [D0, D1, ..., Dn, vocab_size]`. 
+ +For example, consider the following feature vectors: + + vector1 = [-3, 0, 0, 0, 0, 0] + vector2 = [ 0, 1, 0, 4, 1, 0] + vector3 = [ 5, 0, 0, 9, 0, 0] + +These might be stored sparsely in the following Example protos by storing +only the feature ids (column number if the vectors are treated as a matrix) +of the non-zero elements and the corresponding values: + + examples = [Example(features={ + "ids": Feature(int64_list=Int64List(value=[0])), + "values": Feature(float_list=FloatList(value=[-3]))}), + Example(features={ + "ids": Feature(int64_list=Int64List(value=[1, 4, 3])), + "values": Feature(float_list=FloatList(value=[1, 1, 4]))}), + Example(features={ + "ids": Feature(int64_list=Int64List(value=[0, 3])), + "values": Feature(float_list=FloatList(value=[5, 9]))})] + +The result of calling parse_example on these examples will produce a +dictionary with entries for "ids" and "values". Passing those two objects +to this function along with vocab_size=6 will produce a `SparseTensor` that +sparsely represents all three instances. Namely, the `indices` property will +contain the coordinates of the non-zero entries in the feature matrix (the +first dimension is the row number in the matrix, i.e., the index within the +batch, and the second dimension is the column number, i.e., the feature id); +`values` will contain the actual values. `shape` will be the shape of the +original matrix, i.e., (3, 6). For our example above, the output will be +equal to: + + SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]], + values=[-3, 1, 4, 1, 5, 9], + shape=[3, 6]) + +##### Args: + + +* `sp_ids`: A `SparseTensor` with `values` property of type `int32` + or `int64`. +* `sp_values`: A `SparseTensor` of any type. +* `vocab_size`: A scalar `int64` Tensor (or Python int) containing the new size + of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.
+* `name`: A name prefix for the returned tensors (optional) + +##### Returns: + + A `SparseTensor` compactly representing a batch of feature ids and values, + useful for passing to functions that expect such a `SparseTensor`. + +##### Raises: + + +* `TypeError`: If `sp_ids` or `sp_values` are not a `SparseTensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_placeholder.md deleted file mode 100644 index def6c8329d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_placeholder.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.sparse_placeholder(dtype, shape=None, name=None)` {#sparse_placeholder} - -Inserts a placeholder for a sparse tensor that will be always fed. - -**Important**: This sparse tensor will produce an error if evaluated. -Its value must be fed using the `feed_dict` optional argument to -`Session.run()`, `Tensor.eval()`, or `Operation.run()`. - -For example: - -```python -x = tf.sparse_placeholder(tf.float32) -y = tf.sparse_reduce_sum(x) - -with tf.Session() as sess: - print(sess.run(y)) # ERROR: will fail because x was not fed. - - indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64) - values = np.array([1.0, 2.0], dtype=np.float32) - shape = np.array([7, 9, 2], dtype=np.int64) - print(sess.run(y, feed_dict={ - x: tf.SparseTensorValue(indices, values, shape)})) # Will succeed. - print(sess.run(y, feed_dict={ - x: (indices, values, shape)})) # Will succeed. - - sp = tf.SparseTensor(indices=indices, values=values, shape=shape) - sp_value = sp.eval(session) - print(sess.run(y, feed_dict={x: sp_value})) # Will succeed. -``` - -##### Args: - - -* `dtype`: The type of `values` elements in the tensor to be fed. -* `shape`: The shape of the tensor to be fed (optional). If the shape is not - specified, you can feed a sparse tensor of any shape. 
-* `name`: A name for prefixing the operations (optional). - -##### Returns: - - A `SparseTensor` that may be used as a handle for feeding a value, but not - evaluated directly. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_reset_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_reset_shape.md deleted file mode 100644 index d0606cdc5d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_reset_shape.md +++ /dev/null @@ -1,60 +0,0 @@ -### `tf.sparse_reset_shape(sp_input, new_shape=None)` {#sparse_reset_shape} - -Resets the shape of a `SparseTensor` with indices and values unchanged. - -If `new_shape` is None, returns a copy of `sp_input` with its shape reset -to the tight bounding box of `sp_input`. - -If `new_shape` is provided, then it must be larger or equal in all dimensions -compared to the shape of `sp_input`. When this condition is met, the returned -SparseTensor will have its shape reset to `new_shape` and its indices and -values unchanged from those of `sp_input`. - -For example: - - Consider a `sp_input` with shape [2, 3, 5]: - - [0, 0, 1]: a - [0, 1, 0]: b - [0, 2, 2]: c - [1, 0, 3]: d - - - It is an error to set `new_shape` as [3, 7] since this represents a - rank-2 tensor while `sp_input` is rank-3. This is either a ValueError - during graph construction (if both shapes are known) or an OpError during - run time. - - - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or - equal in every dimension compared to the original shape [2, 3, 5]. - - - On the other hand, setting new_shape as [2, 3, 4] is also an error: The - third dimension is smaller than the original shape [2, 3, 5] (and an - `InvalidArgumentError` will be raised). - - - If `new_shape` is None, the returned SparseTensor will have a shape - [2, 3, 4], which is the tight bounding box of `sp_input`. - -##### Args: - - -* `sp_input`: The input `SparseTensor`.
-* `new_shape`: None or a vector representing the new shape for the returned - `SparseTensor`. - -##### Returns: - - A `SparseTensor` with indices and values unchanged from `sp_input`. Its - shape is `new_shape` if that is set. Otherwise it is the tight bounding box - of `sp_input`. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. -* `ValueError`: If `new_shape` represents a tensor with a different rank from - that of `sp_input` (if shapes are known when graph is constructed). -* `OpError`: - - If `new_shape` has dimension sizes that are too small. - - If shapes are not known during graph construction time, and during run - time it is found out that the ranks do not match. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md new file mode 100644 index 0000000000..d95830b8a9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_segment_mean.md @@ -0,0 +1,27 @@ +### `tf.sparse_segment_mean(data, indices, segment_ids, name=None)` {#sparse_segment_mean} + +Computes the mean along sparse segments of a tensor. + +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first +dimension, selecting a subset of dimension 0, specified by `indices`. + +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `indices`: A `Tensor` of type `int32`. + A 1-D tensor. Has same rank as `segment_ids`. +* `segment_ids`: A `Tensor` of type `int32`. + A 1-D tensor. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments.
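The "gather a subset of dimension 0, then take segment means" behavior can be mimicked in NumPy (an illustrative sketch, not the TensorFlow op):

```python
import numpy as np

data = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
indices = np.array([0, 1, 3])      # subset of dimension 0 to select
segment_ids = np.array([0, 0, 1])  # same length as `indices`, sorted

selected = data[indices]           # gather first, then segment-mean
out = np.array([selected[segment_ids == i].mean(axis=0)
                for i in range(int(segment_ids.max()) + 1)])
print(out)
# [[2. 3.]
#  [7. 8.]]
```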
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md new file mode 100644 index 0000000000..cb54fd9452 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md @@ -0,0 +1,51 @@ +### `tf.sparse_softmax(sp_input, name=None)` {#sparse_softmax} + +Applies softmax to a batched N-D `SparseTensor`. + +The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` +(where `N >= 2`), and with indices sorted in the canonical lexicographic +order. + +This op is equivalent to applying the normal `tf.nn.softmax()` to each +innermost logical submatrix with shape `[B, C]`, but with the catch that *the +implicitly zero elements do not participate*. Specifically, the algorithm is +equivalent to: + + (1) Applies `tf.nn.softmax()` to a densified view of each innermost + submatrix with shape `[B, C]`, along the size-C dimension; + (2) Masks out the original implicitly-zero locations; + (3) Renormalizes the remaining elements. + +Hence, the `SparseTensor` result has exactly the same non-zero indices and +shape. + +Example: +```python +# First batch: +# [? e.] +# [1. ? ] +# Second batch: +# [e ? ] +# [e e ] +shape = [2, 2, 2] # 3-D SparseTensor +values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]]) +indices = np.vstack(np.where(values)).astype(np.int64).T + +result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape)) +# ...returning a 3-D SparseTensor, equivalent to: +# [? 1.] [1 ?] +# [1. ? ] and [.5 .5] +# where ? means implicitly zero. +``` + +##### Args: + + +* `sp_input`: N-D `SparseTensor`, where `N >= 2`. +* `name`: optional name of the operation. + +##### Returns: + + +* `output`: N-D `SparseTensor` representing the results. 
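The three-step algorithm above can be mimicked in NumPy on one densified `[B, C]` submatrix, with zeros standing in for the implicitly-zero slots (a sketch of the semantics, not the sparse kernel):

```python
import numpy as np

mat = np.array([[np.e, 0.],      # the second batch from the example above
                [np.e, np.e]])
mask = mat != 0                  # implicitly-zero locations stay masked out

out = np.zeros_like(mat)
for b in range(mat.shape[0]):
    nz = mat[b, mask[b]]
    e = np.exp(nz - nz.max())    # numerically stable softmax taken over
    out[b, mask[b]] = e / e.sum()  # the non-zero entries only
print(out)
# [[1.  0. ]
#  [0.5 0.5]]
```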
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.split.md new file mode 100644 index 0000000000..f6cc936328 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.split.md @@ -0,0 +1,29 @@ +### `tf.split(split_dim, num_split, value, name='split')` {#split} + +Splits a tensor into `num_split` tensors along one dimension. + +Splits `value` along dimension `split_dim` into `num_split` smaller tensors. +Requires that `num_split` evenly divide `value.shape[split_dim]`. + +For example: + +```python +# 'value' is a tensor with shape [5, 30] +# Split 'value' into 3 tensors along dimension 1 +split0, split1, split2 = tf.split(1, 3, value) +tf.shape(split0) ==> [5, 10] +``` + +##### Args: + + +* `split_dim`: A 0-D `int32` `Tensor`. The dimension along which to split. + Must be in the range `[0, rank(value))`. +* `num_split`: A Python integer. The number of ways to split. +* `value`: The `Tensor` to split. +* `name`: A name for the operation (optional). + +##### Returns: + + `num_split` `Tensor` objects resulting from splitting `value`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.square.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.square.md new file mode 100644 index 0000000000..649c763015 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.square.md @@ -0,0 +1,16 @@ +### `tf.square(x, name=None)` {#square} + +Computes square of x element-wise. + +I.e., \\(y = x * x = x^2\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md new file mode 100644 index 0000000000..653236cf9f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.assert_equal_graph_def.md @@ -0,0 +1,20 @@ +### `tf.test.assert_equal_graph_def(actual, expected)` {#assert_equal_graph_def} + +Asserts that two `GraphDef`s are (mostly) the same. + +Compares two `GraphDef` protos for equality, ignoring versions and ordering of +nodes, attrs, and control inputs. Node names are used to match up nodes +between the graphs, so the naming of nodes must be consistent. + +##### Args: + + +* `actual`: The `GraphDef` we have. +* `expected`: The `GraphDef` we expected. + +##### Raises: + + +* `AssertionError`: If the `GraphDef`s do not match. +* `TypeError`: If either argument is not a `GraphDef`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_bfloat16.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_bfloat16.md deleted file mode 100644 index 3d55da1110..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_bfloat16.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.to_bfloat16(x, name='ToBFloat16')` {#to_bfloat16} - -Casts a tensor to type `bfloat16`. - -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to the `bfloat16`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_int32.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_int32.md deleted file mode 100644 index fcc9db61cc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_int32.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.to_int32(x, name='ToInt32')` {#to_int32} - -Casts a tensor to type `int32`. - -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to the `int32`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Coordinator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Coordinator.md new file mode 100644 index 0000000000..f51c0721ff --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Coordinator.md @@ -0,0 +1,223 @@ +A coordinator for threads. + +This class implements a simple mechanism to coordinate the termination of a +set of threads. + +#### Usage: + +```python +# Create a coordinator. +coord = Coordinator() +# Start a number of threads, passing the coordinator to each of them. +...start thread 1...(coord, ...) +...start thread N...(coord, ...) +# Wait for all the threads to terminate. +coord.join(threads) +``` + +Any of the threads can call `coord.request_stop()` to ask for all the threads +to stop. To cooperate with the requests, each thread must check for +`coord.should_stop()` on a regular basis. `coord.should_stop()` returns +`True` as soon as `coord.request_stop()` has been called. + +A typical thread running with a coordinator will do something like: + +```python +while not coord.should_stop(): + ...do some work... 
+``` + +#### Exception handling: + +A thread can report an exception to the coordinator as part of the +`should_stop()` call. The exception will be re-raised from the +`coord.join()` call. + +Thread code: + +```python +try: + while not coord.should_stop(): + ...do some work... +except Exception as e: + coord.request_stop(e) +``` + +Main code: + +```python +try: + ... + coord = Coordinator() + # Start a number of threads, passing the coordinator to each of them. + ...start thread 1...(coord, ...) + ...start thread N...(coord, ...) + # Wait for all the threads to terminate. + coord.join(threads) +except Exception as e: + ...exception that was passed to coord.request_stop() +``` + +To simplify the thread implementation, the Coordinator provides a +context handler `stop_on_exception()` that automatically requests a stop if +an exception is raised. Using the context handler the thread code above +can be written as: + +```python +with coord.stop_on_exception(): + while not coord.should_stop(): + ...do some work... +``` + +#### Grace period for stopping: + +After a thread has called `coord.request_stop()` the other threads have a -fixed time to stop; this is called the 'stop grace period' and defaults to 2 +minutes. If any of the threads is still alive after the grace period expires, +`coord.join()` raises a `RuntimeError` reporting the laggards. + +```python +try: + ... + coord = Coordinator() + # Start a number of threads, passing the coordinator to each of them. + ...start thread 1...(coord, ...) + ...start thread N...(coord, ...) + # Wait for all the threads to terminate, give them 10s grace period + coord.join(threads, stop_grace_period_secs=10) +except RuntimeError: + ...one of the threads took more than 10s to stop after request_stop() + ...was called. +except Exception: + ...exception that was passed to coord.request_stop() +``` +- - - + +#### `tf.train.Coordinator.__init__()` {#Coordinator.__init__} + +Create a new Coordinator.
+ + +- - - + +#### `tf.train.Coordinator.clear_stop()` {#Coordinator.clear_stop} + +Clears the stop flag. + +After this is called, calls to `should_stop()` will return `False`. + + +- - - + +#### `tf.train.Coordinator.join(threads, stop_grace_period_secs=120)` {#Coordinator.join} + +Wait for threads to terminate. + +Blocks until all `threads` have terminated or `request_stop()` is called. + +After the threads stop, if an `exc_info` was passed to `request_stop`, that +exception is re-raised. + +Grace period handling: When `request_stop()` is called, threads are given +'stop_grace_period_secs' seconds to terminate. If any of them is still +alive after that period expires, a `RuntimeError` is raised. Note that if +an `exc_info` was passed to `request_stop()` then it is raised instead of +that `RuntimeError`. + +##### Args: + + +* `threads`: List of `threading.Thread` objects. The started threads to join. +* `stop_grace_period_secs`: Number of seconds given to threads to stop after + `request_stop()` has been called. + +##### Raises: + + +* `RuntimeError`: If any thread is still alive after `request_stop()` + is called and the grace period expires. + + +- - - + +#### `tf.train.Coordinator.request_stop(ex=None)` {#Coordinator.request_stop} + +Request that the threads stop. + +After this is called, calls to `should_stop()` will return `True`. + +Note: If an exception is being passed in, it must be in the context of +handling the exception (i.e. `try: ... except Exception as ex: ...`) and not +a newly created one. + +##### Args: + + +* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by + `sys.exc_info()`. If this is the first call to `request_stop()` the + corresponding exception is recorded and re-raised from `join()`. + + +- - - + +#### `tf.train.Coordinator.should_stop()` {#Coordinator.should_stop} + +Check if stop was requested. + +##### Returns: + + True if a stop was requested.
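The `request_stop()`/`should_stop()` handshake described above can be sketched with a minimal stand-in built on `threading.Event` — an illustration of the cooperation pattern, not the real `tf.train.Coordinator`:

```python
import threading
import time

class MiniCoordinator:
    """Toy stand-in showing the request_stop()/should_stop() protocol."""
    def __init__(self):
        self._stop = threading.Event()
    def should_stop(self):
        return self._stop.is_set()
    def request_stop(self):
        self._stop.set()
    def join(self, threads):
        for t in threads:
            t.join()

def worker(coord, results):
    # Cooperating thread: checks should_stop() on a regular basis.
    while not coord.should_stop():
        results.append(1)
        time.sleep(0.01)

coord = MiniCoordinator()
results = []
threads = [threading.Thread(target=worker, args=(coord, results))]
threads[0].start()
time.sleep(0.05)
coord.request_stop()   # any thread may ask all of them to stop
coord.join(threads)
print(len(results) > 0 and not threads[0].is_alive())  # True
```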
+ + +- - - + +#### `tf.train.Coordinator.stop_on_exception()` {#Coordinator.stop_on_exception} + +Context manager to request stop when an Exception is raised. + +Code that uses a coordinator must catch exceptions and pass +them to the `request_stop()` method to stop the other threads +managed by the coordinator. + +This context handler simplifies the exception handling. +Use it as follows: + +```python +with coord.stop_on_exception(): + # Any exception raised in the body of the with + # clause is reported to the coordinator before terminating + # the execution of the body. + ...body... +``` + +This is completely equivalent to the slightly longer code: + +```python +try: + ...body... +except Exception as ex: + coord.request_stop(ex) +``` + +##### Yields: + + nothing. + + +- - - + +#### `tf.train.Coordinator.wait_for_stop(timeout=None)` {#Coordinator.wait_for_stop} + +Wait till the Coordinator is told to stop. + +##### Args: + + +* `timeout`: Float. Sleep for up to that many seconds waiting for + should_stop() to become True. + +##### Returns: + + True if the Coordinator is told to stop, False if the timeout expired. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Saver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Saver.md deleted file mode 100644 index 8bf255040e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Saver.md +++ /dev/null @@ -1,315 +0,0 @@ -Saves and restores variables. - -See [Variables](../../how_tos/variables/index.md) -for an overview of variables, saving and restoring. - -The `Saver` class adds ops to save and restore variables to and from -*checkpoints*. It also provides convenience methods to run these ops. - -Checkpoints are binary files in a proprietary format which map variable names -to tensor values. The best way to examine the contents of a checkpoint is to -load it using a `Saver`.
- -Savers can automatically number checkpoint filenames with a provided counter. -This lets you keep multiple checkpoints at different steps while training a -model. For example you can number the checkpoint filenames with the training -step number. To avoid filling up disks, savers manage checkpoint files -automatically. For example, they can keep only the N most recent files, or -one checkpoint for every N hours of training. - -You number checkpoint filenames by passing a value to the optional -`global_step` argument to `save()`: - -```python -saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0' -... -saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000' -``` - -Additionally, optional arguments to the `Saver()` constructor let you control -the proliferation of checkpoint files on disk: - -* `max_to_keep` indicates the maximum number of recent checkpoint files to - keep. As new files are created, older files are deleted. If None or 0, - all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent - checkpoint files are kept.) - -* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent - `max_to_keep` checkpoint files, you might want to keep one checkpoint file - for every N hours of training. This can be useful if you want to later - analyze how a model progressed during a long training session. For - example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep - one checkpoint file for every 2 hours of training. The default value of - 10,000 hours effectively disables the feature. - -Note that you still have to call the `save()` method to save the model. -Passing these arguments to the constructor will not save variables -automatically for you. - -A training program that saves regularly looks like: - -```python -... -# Create a saver. -saver = tf.train.Saver(...variables...) -# Launch the graph and train, saving the model every 1,000 steps. 
-sess = tf.Session() -for step in xrange(1000000): - sess.run(..training_op..) - if step % 1000 == 0: - # Append the step number to the checkpoint name: - saver.save(sess, 'my-model', global_step=step) -``` - -In addition to checkpoint files, savers keep a protocol buffer on disk with -the list of recent checkpoints. This is used to manage numbered checkpoint -files and by `latest_checkpoint()`, which makes it easy to discover the path -to the most recent checkpoint. That protocol buffer is stored in a file named -'checkpoint' next to the checkpoint files. - -If you create several savers, you can specify a different filename for the -protocol buffer file in the call to `save()`. - -- - - - -#### `tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None)` {#Saver.__init__} - -Creates a `Saver`. - -The constructor adds ops to save and restore variables. - -`var_list` specifies the variables that will be saved and restored. It can -be passed as a `dict` or a list: - -* A `dict` of names to variables: The keys are the names that will be - used to save or restore the variables in the checkpoint files. -* A list of variables: The variables will be keyed with their op name in - the checkpoint files. - -For example: - -```python -v1 = tf.Variable(..., name='v1') -v2 = tf.Variable(..., name='v2') - -# Pass the variables as a dict: -saver = tf.train.Saver({'v1': v1, 'v2': v2}) - -# Or pass them as a list. -saver = tf.train.Saver([v1, v2]) -# Passing a list is equivalent to passing a dict with the variable op names -# as keys: -saver = tf.train.Saver({v.op.name: v for v in [v1, v2]}) -``` - -The optional `reshape` argument, if `True`, allows restoring a variable from -a save file where the variable had a different shape, but the same number -of elements and type. 
This is useful if you have reshaped a variable and -want to reload it from an older checkpoint. - -The optional `sharded` argument, if `True`, instructs the saver to shard -checkpoints per device. - -##### Args: - - -* `var_list`: A list of `Variable` objects or a dictionary mapping names to - variables. If `None`, defaults to the list of all variables. -* `reshape`: If `True`, allows restoring parameters from a checkpoint - where the variables have a different shape. -* `sharded`: If `True`, shard the checkpoints, one per device. -* `max_to_keep`: Maximum number of recent checkpoints to keep. - Defaults to 5. -* `keep_checkpoint_every_n_hours`: How often to keep checkpoints. - Defaults to 10,000 hours. -* `name`: String. Optional name to use as a prefix when adding operations. -* `restore_sequentially`: A `Bool`, which if true, causes restore of different - variables to happen sequentially within each device. This can lower - memory usage when restoring very large models. -* `saver_def`: Optional `SaverDef` proto to use instead of running the - builder. This is only useful for specialty code that wants to recreate - a `Saver` object for a previously built `Graph` that had a `Saver`. - The `saver_def` proto should be the one returned by the - `as_saver_def()` call of the `Saver` that was created for that `Graph`. -* `builder`: Optional `SaverBuilder` to use if a `saver_def` was not provided. - Defaults to `BaseSaverBuilder()`. - -##### Raises: - - -* `TypeError`: If `var_list` is invalid. -* `ValueError`: If any of the keys or values in `var_list` are not unique. - - -- - - - -#### `tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True)` {#Saver.save} - -Saves variables. - -This method runs the ops added by the constructor for saving variables. -It requires a session in which the graph was launched. The variables to -save must also have been initialized. 
- -The method returns the path of the newly created checkpoint file. This -path can be passed directly to a call to `restore()`. - -##### Args: - - -* `sess`: A Session to use to save the variables. -* `save_path`: String. Path to the checkpoint filename. If the saver is - `sharded`, this is the prefix of the sharded checkpoint filename. -* `global_step`: If provided the global step number is appended to - `save_path` to create the checkpoint filename. The optional argument - can be a `Tensor`, a `Tensor` name or an integer. -* `latest_filename`: Optional name for the protocol buffer file that will - contains the list of most recent checkpoint filenames. That file, - kept in the same directory as the checkpoint files, is automatically - managed by the saver to keep track of recent checkpoints. Defaults to - 'checkpoint'. -* `meta_graph_suffix`: Suffix for `MetaGraphDef` file. Defaults to 'meta'. -* `write_meta_graph`: `Boolean` indicating whether or not to write the meta - graph file. - -##### Returns: - - A string: path at which the variables were saved. If the saver is - sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn' - is the number of shards created. - -##### Raises: - - -* `TypeError`: If `sess` is not a `Session`. -* `ValueError`: If `latest_filename` contains path components. - - -- - - - -#### `tf.train.Saver.restore(sess, save_path)` {#Saver.restore} - -Restores previously saved variables. - -This method runs the ops added by the constructor for restoring variables. -It requires a session in which the graph was launched. The variables to -restore do not have to have been initialized, as restoring is itself a way -to initialize variables. - -The `save_path` argument is typically a value previously returned from a -`save()` call, or a call to `latest_checkpoint()`. - -##### Args: - - -* `sess`: A `Session` to use to restore the parameters. -* `save_path`: Path where parameters were previously saved. 
- -##### Raises: - - -* `ValueError`: If the given `save_path` does not point to a file. - - - -Other utility methods. - -- - - - -#### `tf.train.Saver.last_checkpoints` {#Saver.last_checkpoints} - -List of not-yet-deleted checkpoint filenames. - -You can pass any of the returned values to `restore()`. - -##### Returns: - - A list of checkpoint filenames, sorted from oldest to newest. - - -- - - - -#### `tf.train.Saver.set_last_checkpoints(last_checkpoints)` {#Saver.set_last_checkpoints} - -DEPRECATED: Use set_last_checkpoints_with_time. - -Sets the list of old checkpoint filenames. - -##### Args: - - -* `last_checkpoints`: A list of checkpoint filenames. - -##### Raises: - - -* `AssertionError`: If last_checkpoints is not a list. - - -- - - - -#### `tf.train.Saver.as_saver_def()` {#Saver.as_saver_def} - -Generates a `SaverDef` representation of this saver. - -##### Returns: - - A `SaverDef` proto. - - - -#### Other Methods -- - - - -#### `tf.train.Saver.export_meta_graph(filename=None, collection_list=None, as_text=False)` {#Saver.export_meta_graph} - -Writes `MetaGraphDef` to save_path/filename. - -##### Args: - - -* `filename`: Optional meta_graph filename including the path. -* `collection_list`: List of string keys to collect. -* `as_text`: If `True`, writes the meta_graph as an ASCII proto. - -##### Returns: - - A `MetaGraphDef` proto. - - -- - - - -#### `tf.train.Saver.from_proto(saver_def)` {#Saver.from_proto} - -Returns a `Saver` object created from `saver_def`. - - -- - - - -#### `tf.train.Saver.set_last_checkpoints_with_time(last_checkpoints_with_time)` {#Saver.set_last_checkpoints_with_time} - -Sets the list of old checkpoint filenames and timestamps. - -##### Args: - - -* `last_checkpoints_with_time`: A list of tuples of checkpoint filenames and - timestamps. - -##### Raises: - - -* `AssertionError`: If last_checkpoints_with_time is not a list. 
- - -- - - - -#### `tf.train.Saver.to_proto()` {#Saver.to_proto} - -Converts this `Saver` to a `SaverDef` protocol buffer. - -##### Returns: - - A `SaverDef` protocol buffer. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.create_local_server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.create_local_server.md new file mode 100644 index 0000000000..f349dc0748 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.create_local_server.md @@ -0,0 +1,19 @@ +#### `tf.train.Server.create_local_server(start=True)` {#Server.create_local_server} + +Creates a new single-process cluster running on the local host. + +This method is a convenience wrapper for creating a +`tf.train.Server` with a `tf.train.ServerDef` that specifies a +single-process cluster containing a single task in a job called +`"local"`. + +##### Args: + + +* `start`: (Optional.) Boolean, indicating whether to start the server after + creating it. Defaults to `True`. + +##### Returns: + + A local `tf.train.Server`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.exponential_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.exponential_decay.md new file mode 100644 index 0000000000..2b8e72a0a2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.exponential_decay.md @@ -0,0 +1,54 @@ +### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay} + +Applies exponential decay to the learning rate. + +When training a model, it is often recommended to lower the learning rate as +the training progresses. This function applies an exponential decay function +to a provided initial learning rate. It requires a `global_step` value to +compute the decayed learning rate. 
You can just pass a TensorFlow variable
+that you increment at each training step.
+
+The function returns the decayed learning rate.  It is computed as:
+
+```python
+decayed_learning_rate = learning_rate *
+                        decay_rate ^ (global_step / decay_steps)
+```
+
+If the argument `staircase` is `True`, then `global_step / decay_steps` is an
+integer division and the decayed learning rate follows a staircase function.
+
+Example: decay every 100000 steps with a base of 0.96:
+
+```python
+...
+global_step = tf.Variable(0, trainable=False)
+starter_learning_rate = 0.1
+learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
+                                           100000, 0.96, staircase=True)
+# Passing global_step to minimize() will increment it at each step.
+learning_step = (
+    tf.train.GradientDescentOptimizer(learning_rate)
+    .minimize(...my loss..., global_step=global_step)
+)
+```
+
+##### Args:
+
+
+* `learning_rate`: A scalar `float32` or `float64` `Tensor` or a
+  Python number.  The initial learning rate.
+* `global_step`: A scalar `int32` or `int64` `Tensor` or a Python number.
+  Global step to use for the decay computation.  Must not be negative.
+* `decay_steps`: A scalar `int32` or `int64` `Tensor` or a Python number.
+  Must be positive.  See the decay computation above.
+* `decay_rate`: A scalar `float32` or `float64` `Tensor` or a
+  Python number.  The decay rate.
+* `staircase`: Boolean.  If `True`, decay the learning rate at discrete intervals.
+* `name`: String.  Optional name of the operation.  Defaults to 'ExponentialDecay'.
+
+##### Returns:
+
+  A scalar `Tensor` of the same type as `learning_rate`.  The decayed
+  learning rate.
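The decay formula above is easy to check numerically in plain Python (a sketch of the arithmetic only, not the TensorFlow op):

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    # staircase=True uses integer division, so the exponent only changes
    # every `decay_steps` steps, giving a piecewise-constant schedule.
    if staircase:
        p = global_step // decay_steps
    else:
        p = global_step / decay_steps
    return learning_rate * decay_rate ** p

# At step 150000 with decay every 100000 steps and a base of 0.96:
lr_smooth = exponential_decay(0.1, 150000, 100000, 0.96)
# lr_stairs equals 0.1 * 0.96, since 150000 // 100000 == 1.
lr_stairs = exponential_decay(0.1, 150000, 100000, 0.96, staircase=True)
```

The staircase variant holds the rate constant between decay boundaries, while the smooth variant has already decayed further by the same step.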
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_state.md
new file mode 100644
index 0000000000..e2852b2314
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_state.md
@@ -0,0 +1,19 @@
+### `tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)` {#get_checkpoint_state}
+
+Returns CheckpointState proto from the "checkpoint" file.
+
+If the "checkpoint" file contains a valid CheckpointState
+proto, returns it.
+
+##### Args:
+
+
+* `checkpoint_dir`: The directory of checkpoints.
+* `latest_filename`: Optional name of the checkpoint file.  Defaults to
+  'checkpoint'.
+
+##### Returns:
+
+  A CheckpointState if the state was available, None
+  otherwise.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md
new file mode 100644
index 0000000000..fa73440d88
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md
@@ -0,0 +1,25 @@
+### `tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#range_input_producer}
+
+Produces the integers from 0 to limit-1 in a queue.
+
+##### Args:
+
+
+* `limit`: An int32 scalar tensor.
+* `num_epochs`: An integer (optional). If specified, `range_input_producer`
+  produces each integer `num_epochs` times before generating an
+  OutOfRange error. If not specified, `range_input_producer` can cycle
+  through the integers an unlimited number of times.
+* `shuffle`: Boolean. If true, the integers are randomly shuffled within each
+  epoch.
+* `seed`: An integer (optional). Seed used if shuffle == True.
+* `capacity`: An integer. Sets the queue capacity.
+* `shared_name`: (optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: A name for the operations (optional). + +##### Returns: + + A Queue with the output integers. A `QueueRunner` for the Queue + is added to the current `Graph`'s `QUEUE_RUNNER` collection. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.write_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.write_graph.md new file mode 100644 index 0000000000..eea9025321 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.write_graph.md @@ -0,0 +1,21 @@ +### `tf.train.write_graph(graph_def, logdir, name, as_text=True)` {#write_graph} + +Writes a graph proto to a file. + +The graph is written as a binary proto unless `as_text` is `True`. + +```python +v = tf.Variable(0, name='my_variable') +sess = tf.Session() +tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt') +``` + +##### Args: + + +* `graph_def`: A `GraphDef` protocol buffer. +* `logdir`: Directory where to write the graph. This can refer to remote + filesystems, such as Google Cloud Storage (GCS). +* `name`: Filename for the graph. +* `as_text`: If `True`, writes the graph as an ASCII proto. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md new file mode 100644 index 0000000000..503a98d625 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.tuple.md @@ -0,0 +1,36 @@ +### `tf.tuple(tensors, name=None, control_inputs=None)` {#tuple} + +Group tensors together. + +This creates a tuple of tensors with the same values as the `tensors` +argument, except that the value of each tensor is only returned after the +values of all tensors have been computed. 
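The "return only after everything has been computed" behaviour described above can be mimicked with plain Python threads (a sketch of the control-flow idea only, not of TensorFlow's dataflow implementation):

```python
import threading

def parallel_tuple(fns):
    # Run every computation on its own thread, but return none of the
    # results until all of them have finished -- the "join" semantics
    # that tf.tuple provides for tensors.
    results = [None] * len(fns)

    def run(i, fn):
        results[i] = fn()

    threads = [threading.Thread(target=run, args=(i, fn))
               for i, fn in enumerate(fns)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # block until every computation is done
    return results

values = parallel_tuple([lambda: 1 + 1, lambda: 2 * 3])
```

Each input may finish at a different time, but the caller observes all outputs together, exactly as with the tensors returned by `tuple`.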
+ +`control_inputs` contains additional ops that have to finish before this op +finishes, but whose outputs are not returned. + +This can be used as a "join" mechanism for parallel computations: all the +argument tensors can be computed in parallel, but the values of any tensor +returned by `tuple` are only available after all the parallel computations +are done. + +See also `group` and `with_dependencies`. + +##### Args: + + +* `tensors`: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`. +* `name`: (optional) A name to use as a `name_scope` for the operation. +* `control_inputs`: List of additional ops to finish before returning. + +##### Returns: + + Same as `tensors`. + +##### Raises: + + +* `ValueError`: If `tensors` does not contain any `Tensor` or `IndexedSlices`. +* `TypeError`: If `control_inputs` is not a list of `Operation` or `Tensor` + objects. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unsorted_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unsorted_segment_sum.md deleted file mode 100644 index 63255ce815..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unsorted_segment_sum.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)` {#unsorted_segment_sum} - -Computes the sum along segments of a tensor. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -Computes a tensor such that -\\(output_i = \sum_j data_j\\) where sum is over `j` such -that `segment_ids[j] == i`. Unlike `SegmentSum`, `segment_ids` -need not be sorted and need not cover all values in the full - range of valid values. - -If the sum is empty for a given segment ID `i`, `output[i] = 0`. - -`num_segments` should equal the number of distinct segment IDs. - -
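The segment-sum formula above is straightforward to state in plain Python (a reference sketch of the semantics, not the actual kernel):

```python
def unsorted_segment_sum(data, segment_ids, num_segments):
    # output[i] = sum of data[j] over all j with segment_ids[j] == i.
    # Segment ids need not be sorted, and empty segments sum to 0.
    output = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        output[seg] += value
    return output

result = unsorted_segment_sum([5, 1, 7, 2, 3], [0, 2, 0, 1, 0], 3)
```

Here segments 0, 1, and 2 collect `5 + 7 + 3`, `2`, and `1` respectively, even though the ids arrive unsorted.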
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. -* `num_segments`: A `Tensor` of type `int32`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `num_segments`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.while_loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.while_loop.md deleted file mode 100644 index 4baea56c63..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.while_loop.md +++ /dev/null @@ -1,60 +0,0 @@ -### `tf.while_loop(cond, body, loop_vars, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#while_loop} - -Repeat `body` while the condition `cond` is true. - -`cond` is a callable returning a boolean scalar tensor. `body` is a callable -returning a list of tensors of the same length and with the same types as -`loop_vars`. `loop_vars` is a list of tensors that is passed to both `cond` -and `body`. `cond` and `body` both take as many arguments as there are -`loop_vars`. - -In addition to regular Tensors or IndexedSlices, the body may accept and -return TensorArray objects. The flows of the TensorArray objects will -be appropriately forwarded between loops and during gradient calculations. - -While `cond` evaluates to true, `body` is executed. - -`while_loop` implements non-strict semantics, enabling multiple iterations -to run in parallel. The maximum number of parallel iterations can be -controlled by `parallel_iterations`, which gives users some control over -memory consumption and execution order. 
For correct programs, `while_loop` -should return the same result for any parallel_iterations > 0. - -For training, TensorFlow remembers the tensors that are produced in the -forward inference but needed in back propagation. These tensors can be a -main source of memory consumption and often cause OOM problems when training -on GPUs. When the flag swap_memory is true, we swap out these tensors from -GPU to CPU. This for example allows us to train RNN models with very long -sequences and large batches. - -##### Args: - - -* `cond`: A callable that represents the termination condition of the loop. -* `body`: A callable that represents the loop body. -* `loop_vars`: The list of variable input tensors. -* `parallel_iterations`: The number of iterations allowed to run in parallel. -* `back_prop`: Whether backprop is enabled for this while loop. -* `swap_memory`: Whether GPU-CPU memory swap is enabled for this loop. -* `name`: Optional name prefix for the returned tensors. - -##### Returns: - - The output tensors for the loop variables after the loop. - -##### Raises: - - -* `TypeError`: if `cond` or `body` is not callable. -* `ValueError`: if `loop_var` is empty. - - -* `Example`: - - ```python - i = tf.constant(0) - c = lambda i: tf.less(i, 10) - b = lambda i: tf.add(i, 1) - r = tf.while_loop(c, b, [i]) - ``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.zeros_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.zeros_initializer.md new file mode 100644 index 0000000000..707393f8be --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.zeros_initializer.md @@ -0,0 +1,4 @@ +### `tf.zeros_initializer(shape, dtype=tf.float32)` {#zeros_initializer} + +An adaptor for zeros() to match the Initializer spec. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.DType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.DType.md deleted file mode 100644 index 4c77a143e0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.DType.md +++ /dev/null @@ -1,206 +0,0 @@ -Represents the type of the elements in a `Tensor`. - -The following `DType` objects are defined: - -* `tf.float16`: 16-bit half-precision floating-point. -* `tf.float32`: 32-bit single-precision floating-point. -* `tf.float64`: 64-bit double-precision floating-point. -* `tf.bfloat16`: 16-bit truncated floating-point. -* `tf.complex64`: 64-bit single-precision complex. -* `tf.complex128`: 128-bit double-precision complex. - -* `tf.int8`: 8-bit signed integer. -* `tf.uint8`: 8-bit unsigned integer. -* `tf.uint16`: 16-bit unsigned integer. -* `tf.int16`: 16-bit signed integer. -* `tf.int32`: 32-bit signed integer. -* `tf.int64`: 64-bit signed integer. - -* `tf.bool`: Boolean. - -* `tf.string`: String. - -* `tf.qint8`: Quantized 8-bit signed integer. -* `tf.quint8`: Quantized 8-bit unsigned integer. -* `tf.qint16`: Quantized 16-bit signed integer. -* `tf.quint16`: Quantized 16-bit unsigned integer. -* `tf.qint32`: Quantized 32-bit signed integer. - -In addition, variants of these types with the `_ref` suffix are -defined for reference-typed tensors. - -The `tf.as_dtype()` function converts numpy types and string type -names to a `DType` object. - -- - - - -#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with} - -Returns True if the `other` DType will be converted to this DType. 
- -The conversion rules are as follows: - -``` -DType(T) .is_compatible_with(DType(T)) == True -DType(T) .is_compatible_with(DType(T).as_ref) == True -DType(T).as_ref.is_compatible_with(DType(T)) == False -DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True -``` - -##### Args: - - -* `other`: A `DType` (or object that may be converted to a `DType`). - -##### Returns: - - True if a Tensor of the `other` `DType` will be implicitly converted to - this `DType`. - - -- - - - -#### `tf.DType.name` {#DType.name} - -Returns the string name for this `DType`. - - -- - - - -#### `tf.DType.base_dtype` {#DType.base_dtype} - -Returns a non-reference `DType` based on this `DType`. - - -- - - - -#### `tf.DType.real_dtype` {#DType.real_dtype} - -Returns the dtype correspond to this dtype's real part. - - -- - - - -#### `tf.DType.is_ref_dtype` {#DType.is_ref_dtype} - -Returns `True` if this `DType` represents a reference type. - - -- - - - -#### `tf.DType.as_ref` {#DType.as_ref} - -Returns a reference `DType` based on this `DType`. - - -- - - - -#### `tf.DType.is_floating` {#DType.is_floating} - -Returns whether this is a (real) floating point type. - - -- - - - -#### `tf.DType.is_complex` {#DType.is_complex} - -Returns whether this is a complex floating point type. - - -- - - - -#### `tf.DType.is_integer` {#DType.is_integer} - -Returns whether this is a (non-quantized) integer type. - - -- - - - -#### `tf.DType.is_quantized` {#DType.is_quantized} - -Returns whether this is a quantized data type. - - -- - - - -#### `tf.DType.is_unsigned` {#DType.is_unsigned} - -Returns whether this type is unsigned. - -Non-numeric, unordered, and quantized types are not considered unsigned, and -this function returns `False`. - -##### Returns: - - Whether a `DType` is unsigned. - - - -- - - - -#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype} - -Returns a `numpy.dtype` based on this `DType`. 
- - -- - - - -#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum} - -Returns a `types_pb2.DataType` enum value based on this `DType`. - - - -#### Other Methods -- - - - -#### `tf.DType.__init__(type_enum)` {#DType.__init__} - -Creates a new `DataType`. - -NOTE(mrry): In normal circumstances, you should not need to -construct a `DataType` object directly. Instead, use the -`tf.as_dtype()` function. - -##### Args: - - -* `type_enum`: A `types_pb2.DataType` enum value. - -##### Raises: - - -* `TypeError`: If `type_enum` is not a value `types_pb2.DataType`. - - -- - - - -#### `tf.DType.max` {#DType.max} - -Returns the maximum representable value in this data type. - -##### Raises: - - -* `TypeError`: if this is a non-numeric, unordered, or quantized type. - - -- - - - -#### `tf.DType.min` {#DType.min} - -Returns the minimum representable value in this data type. - -##### Raises: - - -* `TypeError`: if this is a non-numeric, unordered, or quantized type. - - -- - - - -#### `tf.DType.size` {#DType.size} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.IndexedSlices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.IndexedSlices.md deleted file mode 100644 index 435a178205..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.IndexedSlices.md +++ /dev/null @@ -1,93 +0,0 @@ -A sparse representation of a set of tensor slices at given indices. - -This class is a simple wrapper for a pair of `Tensor` objects: - -* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`. -* `indices`: A 1-D integer `Tensor` with shape `[D0]`. - -An `IndexedSlices` is typically used to represent a subset of a larger -tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`. -The values in `indices` are the indices in the first dimension of -the slices that have been extracted from the larger tensor. 
- -The dense tensor `dense` represented by an `IndexedSlices` `slices` has - -```python -dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...] -``` - -The `IndexedSlices` class is used principally in the definition of -gradients for operations that have sparse gradients -(e.g. [`tf.gather`](../../api_docs/python/array_ops.md#gather)). - -Contrast this representation with -[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), -which uses multi-dimensional indices and scalar values. - -- - - - -#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__} - -Creates an `IndexedSlices`. - - - -- - - - -#### `tf.IndexedSlices.values` {#IndexedSlices.values} - -A `Tensor` containing the values of the slices. - - -- - - - -#### `tf.IndexedSlices.indices` {#IndexedSlices.indices} - -A 1-D `Tensor` containing the indices of the slices. - - -- - - - -#### `tf.IndexedSlices.dense_shape` {#IndexedSlices.dense_shape} - -A 1-D `Tensor` containing the shape of the corresponding dense tensor. - - - -- - - - -#### `tf.IndexedSlices.name` {#IndexedSlices.name} - -The name of this `IndexedSlices`. - - -- - - - -#### `tf.IndexedSlices.dtype` {#IndexedSlices.dtype} - -The `DType` of elements in this tensor. - - -- - - - -#### `tf.IndexedSlices.device` {#IndexedSlices.device} - -The name of the device on which `values` will be produced, or `None`. - - -- - - - -#### `tf.IndexedSlices.op` {#IndexedSlices.op} - -The `Operation` that produces `values` as an output. - - - -#### Other Methods -- - - - -#### `tf.IndexedSlices.graph` {#IndexedSlices.graph} - -The `Graph` that contains the values, indices, and shape tensors. 
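The relationship between an `IndexedSlices` and the dense tensor it stands for can be sketched in plain Python (1-D rows for brevity; the function name here is illustrative, not library code):

```python
def slices_to_dense(values, indices, dense_shape):
    # dense[indices[i]] = values[i]; every other row stays zero, which
    # is why this representation is compact when indices covers only a
    # small subset of the first dimension.
    dense = [0] * dense_shape
    for idx, val in zip(indices, values):
        dense[idx] = val
    return dense

dense = slices_to_dense(values=[10, 20], indices=[1, 3], dense_shape=5)
```

Only two of the five rows are stored explicitly; the sparse pair `(values, indices)` is much smaller than the dense result when `dense_shape` is large.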
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.VariableScope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.VariableScope.md deleted file mode 100644 index 04fdca9bdf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.VariableScope.md +++ /dev/null @@ -1,105 +0,0 @@ -Variable scope object to carry defaults to provide to get_variable. - -Many of the arguments we need for get_variable in a variable store are most -easily handled with a context. This object is used for the defaults. - -Attributes: - name: name of the current scope, used as prefix in get_variable. - initializer: default initializer passed to get_variable. - regularizer: default regularizer passed to get_variable. - reuse: Boolean or None, setting the reuse in get_variable. - caching_device: string, callable, or None: the caching device passed to - get_variable. - partitioner: callable or `None`: the partitioner passed to `get_variable`. - name_scope: The name passed to `tf.name_scope`. -- - - - -#### `tf.VariableScope.__init__(reuse, name='', initializer=None, regularizer=None, caching_device=None, partitioner=None, name_scope='')` {#VariableScope.__init__} - -Creates a new VariableScope with the given properties. - - -- - - - -#### `tf.VariableScope.caching_device` {#VariableScope.caching_device} - - - - -- - - - -#### `tf.VariableScope.get_variable(var_store, name, shape=None, dtype=tf.float32, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True)` {#VariableScope.get_variable} - -Gets an existing variable with this name or create a new one. 
- - -- - - - -#### `tf.VariableScope.initializer` {#VariableScope.initializer} - - - - -- - - - -#### `tf.VariableScope.name` {#VariableScope.name} - - - - -- - - - -#### `tf.VariableScope.partitioner` {#VariableScope.partitioner} - - - - -- - - - -#### `tf.VariableScope.regularizer` {#VariableScope.regularizer} - - - - -- - - - -#### `tf.VariableScope.reuse` {#VariableScope.reuse} - - - - -- - - - -#### `tf.VariableScope.reuse_variables()` {#VariableScope.reuse_variables} - -Reuse variables in this scope. - - -- - - - -#### `tf.VariableScope.set_caching_device(caching_device)` {#VariableScope.set_caching_device} - -Set caching_device for this scope. - - -- - - - -#### `tf.VariableScope.set_initializer(initializer)` {#VariableScope.set_initializer} - -Set initializer for this scope. - - -- - - - -#### `tf.VariableScope.set_partitioner(partitioner)` {#VariableScope.set_partitioner} - -Set partitioner for this scope. - - -- - - - -#### `tf.VariableScope.set_regularizer(regularizer)` {#VariableScope.set_regularizer} - -Set regularizer for this scope. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.add_check_numerics_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.add_check_numerics_ops.md new file mode 100644 index 0000000000..9e72af79db --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.add_check_numerics_ops.md @@ -0,0 +1,13 @@ +### `tf.add_check_numerics_ops()` {#add_check_numerics_ops} + +Connect a `check_numerics` to every floating point tensor. + +`check_numerics` operations themselves are added for each `float` or `double` +tensor in the graph. For all ops in the graph, the `check_numerics` op for +all of its (`float` or `double`) inputs is guaranteed to run before the +`check_numerics` op on any of its outputs. + +##### Returns: + + A `group` op depending on all `check_numerics` ops added. 
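What each `check_numerics` op guards against can be sketched in plain Python (an illustration of the NaN/Inf check only, not the TensorFlow kernel or its error type):

```python
import math

def check_numerics(values, message):
    # Fail as soon as any value is NaN or infinite, analogous to the
    # error the real op raises, then pass the values through unchanged.
    for v in values:
        if math.isnan(v) or math.isinf(v):
            raise ValueError("%s : tensor had NaN/Inf values" % message)
    return values

ok = check_numerics([0.5, -1.25], "logits")
```

Because the check passes its input through, it can be spliced between an op and its consumers without changing the computed result.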
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_variables_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_variables_initialized.md deleted file mode 100644 index ef61848aa8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_variables_initialized.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.assert_variables_initialized(var_list=None)` {#assert_variables_initialized} - -Returns an Op to check if variables are initialized. - -NOTE: This function is obsolete and will be removed in 6 months. Please -change your implementation to use `report_uninitialized_variables()`. - -When run, the returned Op will raise the exception `FailedPreconditionError` -if any of the variables has not yet been initialized. - -Note: This function is implemented by trying to fetch the values of the -variables. If one of the variables is not initialized a message may be -logged by the C++ runtime. This is expected. - -##### Args: - - -* `var_list`: List of `Variable` objects to check. Defaults to the - value of `all_variables().` - -##### Returns: - - An Op, or None if there are no variables. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.batch_fft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.batch_fft3d.md deleted file mode 100644 index 10c2ea3bf6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.batch_fft3d.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_fft3d(input, name=None)` {#batch_fft3d} - -Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 - -dimensions of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. 
The inner-most 3 - dimensions of `input` are replaced with their 3D Fourier Transform. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cast.md deleted file mode 100644 index 9571f87afe..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cast.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.cast(x, dtype, name=None)` {#cast} - -Casts a tensor to a new type. - -The operation casts `x` (in case of `Tensor`) or `x.values` -(in case of `SparseTensor`) to `dtype`. - -For example: - -```python -# tensor `a` is [1.8, 2.2], dtype=tf.float -tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32 -``` - -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `dtype`: The destination type. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with same shape as `x`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to the `dtype`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cholesky.md new file mode 100644 index 0000000000..4032b80d8e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.cholesky.md @@ -0,0 +1,22 @@ +### `tf.cholesky(input, name=None)` {#cholesky} + +Calculates the Cholesky decomposition of a square matrix. + +The input has to be symmetric and positive definite. Only the lower-triangular +part of the input will be used for this operation. The upper-triangular part +will not be read. + +The result is the lower-triangular matrix of the Cholesky decomposition of the +input, `L`, so that `input = L L^*`. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. + Shape is `[M, M]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. 
Shape is `[M, M]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.Uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.Uniform.md deleted file mode 100644 index ad6008c9f6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.Uniform.md +++ /dev/null @@ -1,216 +0,0 @@ -Uniform distribution with `a` and `b` parameters. - -The PDF of this distribution is constant between [`a`, `b`], and 0 elsewhere. -- - - - -#### `tf.contrib.distributions.Uniform.__init__(a=0.0, b=1.0, name='Uniform')` {#Uniform.__init__} - -Construct Uniform distributions with `a` and `b`. - -The parameters `a` and `b` must be shaped in a way that supports -broadcasting (e.g. `b - a` is a valid operation). - -Here are examples without broadcasting: - -```python -# Without broadcasting -u1 = Uniform(3.0, 4.0) # a single uniform distribution [3, 4] -u2 = Uniform([1.0, 2.0], [3.0, 4.0]) # 2 distributions [1, 3], [2, 4] -u3 = Uniform([[1.0, 2.0], - [3.0, 4.0]], - [[1.5, 2.5], - [3.5, 4.5]]) # 4 distributions -``` - -And with broadcasting: - -```python -u1 = Uniform(3.0, [5.0, 6.0, 7.0]) # 3 distributions -``` - -##### Args: - - -* `a`: `float` or `double` tensor, the minimum endpoint. -* `b`: `float` or `double` tensor, the maximum endpoint. Must be > `a`. -* `name`: The name to prefix Ops created by this distribution class. - -##### Raises: - - -* `InvalidArgumentError`: if `a >= b`. - - -- - - - -#### `tf.contrib.distributions.Uniform.a` {#Uniform.a} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.b` {#Uniform.b} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.batch_shape(name='batch_shape')` {#Uniform.batch_shape} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.cdf(x, name='cdf')` {#Uniform.cdf} - -CDF of observations in `x` under these Uniform distribution(s). 
- -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `a` and `b`. -* `name`: The name to give this op. - -##### Returns: - - -* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. If `x` is `nan`, will - return `nan`. - - -- - - - -#### `tf.contrib.distributions.Uniform.dtype` {#Uniform.dtype} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.entropy(name='entropy')` {#Uniform.entropy} - -The entropy of Uniform distribution(s). - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.Uniform.event_shape(name='event_shape')` {#Uniform.event_shape} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.get_batch_shape()` {#Uniform.get_batch_shape} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.get_event_shape()` {#Uniform.get_event_shape} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.is_reparameterized` {#Uniform.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.log_cdf(x, name='log_cdf')` {#Uniform.log_cdf} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.log_pdf(x, name='log_pdf')` {#Uniform.log_pdf} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.mean` {#Uniform.mean} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.name` {#Uniform.name} - - - - -- - - - -#### `tf.contrib.distributions.Uniform.pdf(x, name='pdf')` {#Uniform.pdf} - -The PDF of observations in `x` under these Uniform distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `a` and `b`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. If `x` is `nan`, will - return `nan`. - - -- - - - -#### `tf.contrib.distributions.Uniform.range` {#Uniform.range} - -`b - a`. 
- - -- - - - -#### `tf.contrib.distributions.Uniform.sample(n, seed=None, name='sample')` {#Uniform.sample} - -Sample `n` observations from the Uniform Distributions. - -##### Args: - - -* `n`: `Scalar`, type int32, the number of observations to sample. -* `seed`: Python integer, the random seed. -* `name`: The name to give this op. - -##### Returns: - - -* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` - with values of type `self.dtype`. - - -- - - - -#### `tf.contrib.distributions.Uniform.variance` {#Uniform.variance} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md new file mode 100644 index 0000000000..89e4e5ca3c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md @@ -0,0 +1,55 @@ +### `tf.contrib.distributions.normal_congugates_known_sigma_predictive(prior, sigma, s, n)` {#normal_congugates_known_sigma_predictive} + +Posterior predictive Normal distribution w. conjugate prior on the mean. + +This model assumes that `n` observations (with sum `s`) come from a +Normal with unknown mean `mu` (described by the Normal `prior`) +and known variance `sigma^2`. The "known sigma predictive" +is the distribution of new observations, conditioned on the existing +observations and our prior. + +Accepts a prior Normal distribution object, having parameters +`mu0` and `sigma0`, as well as known `sigma` values of the predictive +distribution(s) (also assumed Normal), +and statistical estimates `s` (the sum(s) of the observations) and +`n` (the number(s) of observations). 
+ +Calculates the Normal distribution(s) `p(x | sigma^2)`: + +``` + p(x | sigma^2) = int N(x | mu, sigma^2) N(mu | prior.mu, prior.sigma^2) dmu + = N(x | prior.mu, prior.sigma^2 + sigma^2) +``` + +Returns the predictive posterior distribution object, with parameters +`(mu', sigma'^2)`, where: + +``` +sigma_n^2 = 1/(1/sigma0^2 + n/sigma^2), +mu' = (mu0/sigma0^2 + s/sigma^2) * sigma_n^2, +sigma'^2 = sigma_n^2 + sigma^2. +``` + +Distribution parameters from `prior`, as well as `sigma`, `s`, and `n`, +will broadcast in the case of multidimensional sets of parameters. + +##### Args: + + +* `prior`: `Normal` object of type `dtype`: + the prior distribution having parameters `(mu0, sigma0)`. +* `sigma`: tensor of type `dtype`, taking values `sigma > 0`. + The known stddev parameter(s). +* `s`: Tensor of type `dtype`. The sum(s) of observations. +* `n`: Tensor of type `int`. The number(s) of observations. + +##### Returns: + + A new Normal predictive distribution object. + +##### Raises: + + +* `TypeError`: if dtype of `s` does not match `dtype`, or `prior` is not a + Normal object. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.apply_regularization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.apply_regularization.md deleted file mode 100644 index 8216a4fa25..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.apply_regularization.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.contrib.layers.apply_regularization(regularizer, weights_list=None)` {#apply_regularization} - -Returns the summed penalty by applying `regularizer` to the `weights_list`. - -Adding a regularization penalty over the layer weights and embedding weights -can help prevent overfitting the training data. Regularization over layer -biases is less common/useful, but assuming proper data preprocessing/mean -subtraction, it usually shouldn't hurt much either.
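The closed-form update for `normal_congugates_known_sigma_predictive` is easy to sanity-check numerically. A scalar plain-Python sketch of the predictive parameters `(mu', sigma'^2)`, following the formulas in that doc (the TF helper does the same arithmetic with broadcasting tensors; `normal_known_sigma_predictive` is a hypothetical name used here for illustration):

```python
def normal_known_sigma_predictive(mu0, sigma0, sigma, s, n):
    # Posterior variance of the mean, then the predictive mean/variance,
    # exactly as in the documented formulas.
    sigma_n2 = 1.0 / (1.0 / sigma0 ** 2 + n / sigma ** 2)
    mu_prime = (mu0 / sigma0 ** 2 + s / sigma ** 2) * sigma_n2
    sigma_prime2 = sigma_n2 + sigma ** 2
    return mu_prime, sigma_prime2

# Weak prior (sigma0=10), one observation x=3 with sigma=1:
# the predictive mean is pulled almost all the way to 3.
mu_p, var_p = normal_known_sigma_predictive(0.0, 10.0, 1.0, 3.0, 1)
```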
- -##### Args: - - -* `regularizer`: A function that takes a single `Tensor` argument and returns - a scalar `Tensor` output. -* `weights_list`: List of weights `Tensors` or `Variables` to apply - `regularizer` over. Defaults to the `GraphKeys.WEIGHTS` collection if - `None`. - -##### Returns: - - A scalar representing the overall regularization penalty. - -##### Raises: - - -* `ValueError`: If `regularizer` does not return a scalar output. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.l1_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.l1_regularizer.md new file mode 100644 index 0000000000..1aa8074980 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.l1_regularizer.md @@ -0,0 +1,22 @@ +### `tf.contrib.layers.l1_regularizer(scale)` {#l1_regularizer} + +Returns a function that can be used to apply L1 regularization to weights. + +L1 regularization encourages sparsity. + +##### Args: + + +* `scale`: A scalar multiplier `Tensor`. 0.0 disables the regularizer. + +##### Returns: + + A function with signature `l1(weights, name=None)` that applies L1 + regularization. + +##### Raises: + + +* `ValueError`: If scale is outside of the range [0.0, 1.0] or if scale is not a + float. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.sum_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.sum_regularizer.md deleted file mode 100644 index ee05583b04..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.sum_regularizer.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.contrib.layers.sum_regularizer(regularizer_list)` {#sum_regularizer} - -Returns a function that applies the sum of multiple regularizers. - -##### Args: - - -* `regularizer_list`: A list of regularizers to apply.
- -##### Returns: - - A function with signature `sum_reg(weights, name=None)` that applies the - sum of all the input regularizers. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_activation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_activation.md new file mode 100644 index 0000000000..3aed0ff43c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_activation.md @@ -0,0 +1,16 @@ +### `tf.contrib.layers.summarize_activation(op)` {#summarize_activation} + +Summarize an activation. + +This applies the given activation and adds useful summaries specific to the +activation. + +##### Args: + + +* `op`: The tensor to summarize (assumed to be a layer activation). + +##### Returns: + + The summary op created to summarize `op`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_collection.md new file mode 100644 index 0000000000..b1b5f56056 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_collection.md @@ -0,0 +1,4 @@ +### `tf.contrib.layers.summarize_collection(collection, name_filter=None, summarizer=summarize_tensor)` {#summarize_collection} + +Summarize a graph collection of tensors, possibly filtered by name. 
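The regularizer factories documented above (`l1_regularizer`, `sum_regularizer`) share one shape: the factory captures its configuration and returns a function with signature `reg(weights, name=None)`. A plain-Python sketch of that pattern, with `scale * sum(|w|)` standing in for the graph ops (illustration only, not the contrib implementation):

```python
def l1_regularizer(scale):
    # Factory: validates scale once, then returns the actual regularizer.
    if scale < 0.0:
        raise ValueError("scale must be >= 0.0, got %g" % scale)
    def l1(weights, name=None):
        # L1 penalty: scale * sum of absolute weights (encourages sparsity).
        return scale * sum(abs(w) for w in weights)
    return l1

def sum_regularizer(regularizer_list):
    # Returns a function that applies the sum of all given regularizers.
    def sum_reg(weights, name=None):
        return sum(reg(weights) for reg in regularizer_list)
    return sum_reg

reg = sum_regularizer([l1_regularizer(0.1), l1_regularizer(0.4)])
penalty = reg([1.0, -2.0, 3.0])  # (0.1 + 0.4) * 6.0
```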
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_tensors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_tensors.md deleted file mode 100644 index 608999b437..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.summarize_tensors.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)` {#summarize_tensors} - -Summarize a set of tensors. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.TensorFlowEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.TensorFlowEstimator.md deleted file mode 100644 index c3270290b9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.TensorFlowEstimator.md +++ /dev/null @@ -1,295 +0,0 @@ -Base class for all TensorFlow estimators. - -Parameters: - model_fn: Model function, that takes input X, y tensors and outputs - prediction and loss tensors. - n_classes: Number of classes in the target. - batch_size: Mini batch size. - steps: Number of steps to run over data. - optimizer: Optimizer name (or class), for example "SGD", "Adam", - "Adagrad". - learning_rate: If this is constant float value, no decay function is used. - Instead, a customized decay function can be passed that accepts - global_step as parameter and returns a Tensor. - e.g. exponential decay function: - def exp_decay(global_step): - return tf.train.exponential_decay( - learning_rate=0.1, global_step, - decay_steps=2, decay_rate=0.001) - clip_gradients: Clip norm of the gradients to this value to stop - gradient explosion. - class_weight: None or list of n_classes floats. Weight associated with - classes for loss computation. If not given, all classes are supposed to - have weight one. 
- continue_training: when continue_training is True, once initialized - model will be continuely trained on every call of fit. - config: RunConfig object that controls the configurations of the - session, e.g. num_cores, gpu_memory_fraction, etc. - verbose: Controls the verbosity, possible values: - 0: the algorithm and debug information is muted. - 1: trainer prints the progress. - 2: log device placement is printed. -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.__init__(model_fn, n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, class_weight=None, continue_training=False, config=None, verbose=1)` {#TensorFlowEstimator.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowEstimator.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowEstimator.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. -This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. 
- -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.get_params(deep=True)` {#TensorFlowEstimator.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.get_tensor(name)` {#TensorFlowEstimator.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.get_tensor_value(name)` {#TensorFlowEstimator.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.get_variable_names()` {#TensorFlowEstimator.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.model_dir` {#TensorFlowEstimator.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.partial_fit(x, y)` {#TensorFlowEstimator.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. 
- -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.predict(x, axis=1, batch_size=None)` {#TensorFlowEstimator.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.predict_proba(x, batch_size=None)` {#TensorFlowEstimator.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.restore(cls, path, config=None)` {#TensorFlowEstimator.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. 
- -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.save(path)` {#TensorFlowEstimator.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.set_params(**params)` {#TensorFlowEstimator.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowEstimator.train(input_fn, steps, monitors=None)` {#TensorFlowEstimator.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.extract_dask_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.extract_dask_labels.md deleted file mode 100644 index 15831ce758..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.extract_dask_labels.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.learn.extract_dask_labels(labels)` {#extract_dask_labels} - -Extract data from dask.Series for labels - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.confusion_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.confusion_matrix.md new file mode 100644 index 0000000000..a57fa44318 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.confusion_matrix.md @@ -0,0 +1,45 @@ +### `tf.contrib.metrics.confusion_matrix(predictions, labels, num_classes=None, name=None)` {#confusion_matrix} + +Computes the confusion matrix from predictions and labels. + +Calculate the Confusion Matrix for a pair of prediction and +label 1-D int arrays. + +Considering a prediction array such as: `[1, 2, 3]` +And a label array such as: `[2, 2, 3]` + +##### The confusion matrix returned would be the following one: + + [[0, 0, 0, 0] + [0, 0, 1, 0] + [0, 0, 1, 0] + [0, 0, 0, 1]] + +Where the matrix rows represent the prediction labels and the columns +represent the real labels. The confusion matrix is always a 2-D array +of shape [n, n], where n is the number of valid labels for a given +classification task. Both predictions and labels must be 1-D arrays of +the same shape in order for this function to work. + +##### Args: + + +* `predictions`: A 1-D array representing the predictions for a given + classification. +* `labels`: A 1-D array representing the real labels for the classification task.
+* `num_classes`: The possible number of labels the classification task can + have. If this value is not provided, it will be calculated + using both predictions and labels arrays. +* `name`: Scope name. + +##### Returns: + + An `l x l` matrix representing the confusion matrix, where `l` is the number of + possible labels in the classification task. + +##### Raises: + + +* `ValueError`: If `predictions` and `labels` are not 1-D vectors or do not + have the same size. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md deleted file mode 100644 index bb378fe2a2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.contrib.metrics.set_union(a, b, validate_indices=True)` {#set_union} - -Compute set union of elements in last dimension of `a` and `b`. - -All but the last dimension of `a` and `b` must match. - -##### Args: - - -* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices - must be sorted in row-major order. -* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be - `SparseTensor` if `a` is `SparseTensor`. If sparse, indices must be - sorted in row-major order. -* `validate_indices`: Whether to validate the order and range of sparse indices - in `a` and `b`. - -##### Returns: - - A `SparseTensor` with the same rank as `a` and `b`, and all but the last - dimension the same. Elements along the last dimension contain the - unions.
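The `confusion_matrix` semantics above (rows indexed by predicted label, columns by real label, and the label count inferred from the data when `num_classes` is omitted) can be reproduced in a few lines of plain Python — a sketch of the contract, not the TF kernel:

```python
def confusion_matrix(predictions, labels, num_classes=None):
    # Rows are predicted labels, columns are real labels.
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same size")
    if num_classes is None:
        # Inferred from both arrays when not provided.
        num_classes = max(max(predictions), max(labels)) + 1
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, l in zip(predictions, labels):
        cm[p][l] += 1
    return cm

cm = confusion_matrix([1, 2, 3], [2, 2, 3])
```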
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_cosine_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_cosine_distance.md new file mode 100644 index 0000000000..1900cd1a97 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_cosine_distance.md @@ -0,0 +1,48 @@ +### `tf.contrib.metrics.streaming_mean_cosine_distance(predictions, labels, dim, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_cosine_distance} + +Computes the cosine distance between the labels and predictions. + +The `streaming_mean_cosine_distance` function creates two local variables, +`total` and `count` that are used to compute the average cosine distance +between `predictions` and `labels`. This average is ultimately returned as +`mean_distance`, which is an idempotent operation that simply divides `total` +by `count`. To facilitate the estimation of a mean over multiple batches +of data, the function creates an `update_op` operation whose behavior is +dependent on the value of `weights`. If `weights` is None, then `update_op` +increments `total` with the reduced sum of `values` and increments `count` with +the number of elements in `values`. If `weights` is not `None`, then +`update_op` increments `total` with the reduced sum of the product of `values` +and `weights` and increments `count` with the reduced sum of `weights`. + +##### Args: + + +* `predictions`: A tensor of the same size as labels. +* `labels`: A tensor of arbitrary size. +* `dim`: The dimension along which the cosine distance is computed. +* `weights`: An optional set of weights which indicates which predictions to + ignore during metric computation. Its size matches that of labels except + for the value of 'dim' which should be 1.
For example, if `labels` has + dimensions [32, 100, 200, 3], then `weights` should have dimensions + [32, 100, 200, 1]. +* `metrics_collections`: An optional list of collections that the metric + value variable should be added to. +* `updates_collections`: An optional list of collections that the metric update + ops should be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `mean_distance`: A tensor representing the current mean, the value of `total` + divided by `count`. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately. + +##### Raises: + + +* `ValueError`: If labels and predictions are of different sizes or if the + `weights` tensor is of the wrong size or if either `metrics_collections` or + `updates_collections` are not a list or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_squared_error.md deleted file mode 100644 index 6d682d0427..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_mean_squared_error.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_squared_error} - -Computes the mean squared error between the labels and predictions. - -The `streaming_mean_squared_error` function creates two local variables, -`total` and `count` that are used to compute the mean squared error. -This average is ultimately returned as `mean_squared_error`: an idempotent -operation that simply divides `total` by `count`. To facilitate the estimation -of the mean squared error over a stream of data, the function utilizes two -operations.
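The streaming metrics documented here all follow the same two-variable bookkeeping: local variables `total` and `count`, an `update_op` that folds each batch in and returns the running mean, and an idempotent result that divides `total` by `count`. A plain-Python sketch of that pattern for the unweighted cosine-distance case (illustration only; the class and method names are hypothetical, not TF API):

```python
def cosine_distance(u, v):
    # 1 - cos(angle between u and v).
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return 1.0 - dot / (nu * nv)

class StreamingMeanCosineDistance(object):
    # The "local variables" of the metric.
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, predictions, labels):
        # Fold one batch in; like update_op, also return the running mean.
        for p, l in zip(predictions, labels):
            self.total += cosine_distance(p, l)
            self.count += 1
        return self.total / self.count

m = StreamingMeanCosineDistance()
m.update([[1.0, 0.0]], [[1.0, 0.0]])         # identical vectors: distance 0
mean = m.update([[1.0, 0.0]], [[0.0, 1.0]])  # orthogonal vectors: distance 1
```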
First, a `squared_error` operation computes the element-wise -square of the difference between `predictions` and `labels`. Second, an -`update_op` operation whose behavior is dependent on the value of `weights`. -If `weights` is None, then `update_op` increments `total` with the -reduced sum of `squared_error` and increments `count` with the number of -elements in `squared_error`. If `weights` is not `None`, then `update_op` -increments `total` with the reduced sum of the product of `weights` and -`squared_error` and increments `count` with the reduced sum of `weights`. In -addition to performing the updates, `update_op` also returns the -`mean_squared_error` value. - -##### Args: - - -* `predictions`: A `Tensor` of arbitrary shape. -* `labels`: A `Tensor` of the same shape as `predictions`. -* `weights`: An optional set of weights of the same shape as `predictions`. If - `weights` is not None, the function computes a weighted mean. -* `metrics_collections`: An optional list of collections that - `mean_squared_error` should be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `mean_squared_error`: A tensor representing the current mean, the value of - `total` divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `mean_squared_error`. - -##### Raises: - - -* `ValueError`: If `weights` is not `None` and its shape doesn't match - `predictions` or if either `metrics_collections` or `updates_collections` - are not a list or tuple. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.constant_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.constant_value.md new file mode 100644 index 0000000000..58ba7b0abb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.constant_value.md @@ -0,0 +1,31 @@ +### `tf.contrib.util.constant_value(tensor)` {#constant_value} + +Returns the constant value of the given tensor, if efficiently calculable. + +This function attempts to partially evaluate the given tensor, and +returns its value as a numpy ndarray if this succeeds. + +TODO(mrry): Consider whether this function should use a registration +mechanism like gradients and ShapeFunctions, so that it is easily +extensible. + +NOTE: If `constant_value(tensor)` returns a non-`None` result, it will no +longer be possible to feed a different value for `tensor`. This allows the +result of this function to influence the graph that is constructed, and +permits static shape optimizations. + +##### Args: + + +* `tensor`: The Tensor to be evaluated. + +##### Returns: + + A numpy ndarray containing the constant value of the given `tensor`, + or None if it cannot be calculated. + +##### Raises: + + +* `TypeError`: if tensor is not an ops.Tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.ops_used_by_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.ops_used_by_graph_def.md new file mode 100644 index 0000000000..38a9cc4f43 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.ops_used_by_graph_def.md @@ -0,0 +1,15 @@ +### `tf.contrib.util.ops_used_by_graph_def(graph_def)` {#ops_used_by_graph_def} + +Collect the list of ops used by a graph. + +Does not validate that the ops are all registered. 
+ +##### Args: + + +* `graph_def`: A `GraphDef` proto, as from `graph.as_graph_def()`. + +##### Returns: + + A list of strings, each naming an op used by the graph. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.convert_to_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.convert_to_tensor.md new file mode 100644 index 0000000000..29902ed467 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.convert_to_tensor.md @@ -0,0 +1,47 @@ +### `tf.convert_to_tensor(value, dtype=None, name=None, as_ref=False)` {#convert_to_tensor} + +Converts the given `value` to a `Tensor`. + +This function converts Python objects of various types to `Tensor` +objects. It accepts `Tensor` objects, numpy arrays, Python lists, +and Python scalars. For example: + +```python +import numpy as np + +def my_func(arg): + arg = tf.convert_to_tensor(arg, dtype=tf.float32) + return tf.matmul(arg, arg) + arg + +# The following calls are equivalent. +value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) +value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) +value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) +``` + +This function can be useful when composing a new operation in Python +(such as `my_func` in the example above). All standard Python op +constructors apply this function to each of their Tensor-valued +inputs, which allows those ops to accept numpy arrays, Python lists, +and scalars in addition to `Tensor` objects. + +##### Args: + + +* `value`: An object whose type has a registered `Tensor` conversion function. +* `dtype`: Optional element type for the returned tensor. If missing, the + type is inferred from the type of `value`. +* `name`: Optional name to use if a new `Tensor` is created. +* `as_ref`: True if we want the result as a ref tensor. Only used if a new + `Tensor` is created. + +##### Returns: + + A `Tensor` based on `value`. 
+ +##### Raises: + + +* `TypeError`: If no conversion function is registered for `value`. +* `RuntimeError`: If a registered conversion function returns an invalid value. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.device.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.device.md new file mode 100644 index 0000000000..2a5e33203d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.device.md @@ -0,0 +1,19 @@ +### `tf.device(device_name_or_function)` {#device} + +Wrapper for `Graph.device()` using the default graph. + +See +[`Graph.device()`](../../api_docs/python/framework.md#Graph.device) +for more details. + +##### Args: + + +* `device_name_or_function`: The device name or function to use in + the context. + +##### Returns: + + A context manager that specifies the default device to use for newly + created ops. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.dynamic_partition.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.dynamic_partition.md deleted file mode 100644 index 3fbb885055..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.dynamic_partition.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.dynamic_partition(data, partitions, num_partitions, name=None)` {#dynamic_partition} - -Partitions `data` into `num_partitions` tensors using indices from `partitions`. - -For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` -becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` -are placed in `outputs[i]` in lexicographic order of `js`, and the first -dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. -In detail, - - outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] - - outputs[i] = pack([data[js, ...] for js if partitions[js] == i]) - -`data.shape` must start with `partitions.shape`. 
- -For example: - - # Scalar partitions - partitions = 1 - num_partitions = 2 - data = [10, 20] - outputs[0] = [] # Empty with shape [0, 2] - outputs[1] = [[10, 20]] - - # Vector partitions - partitions = [0, 0, 1, 1, 0] - num_partitions = 2 - data = [10, 20, 30, 40, 50] - outputs[0] = [10, 20, 50] - outputs[1] = [30, 40] - -
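The vector-partitions example above can be reproduced with a small pure-Python sketch (illustration only; the real op also supports scalar and higher-rank `partitions` and returns tensors):

```python
def dynamic_partition(data, partitions, num_partitions):
    """Sketch of the vector-`partitions` case: each data[j] is routed to
    outputs[partitions[j]], preserving the original (lexicographic) order."""
    outputs = [[] for _ in range(num_partitions)]
    for j, p in enumerate(partitions):
        outputs[p].append(data[j])
    return outputs

print(dynamic_partition([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2))
# [[10, 20, 50], [30, 40]]
```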
- -
- -##### Args: - - -* `data`: A `Tensor`. -* `partitions`: A `Tensor` of type `int32`. - Any shape. Indices in the range `[0, num_partitions)`. -* `num_partitions`: An `int` that is `>= 1`. - The number of partitions to output. -* `name`: A name for the operation (optional). - -##### Returns: - - A list of `num_partitions` `Tensor` objects of the same type as data. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.CancelledError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.CancelledError.md deleted file mode 100644 index cf20c0e2e3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.CancelledError.md +++ /dev/null @@ -1,17 +0,0 @@ -Raised when an operation or step is cancelled. - -For example, a long-running operation (e.g. -[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue) may be -cancelled by running another operation (e.g. -[`queue.close(cancel_pending_enqueues=True)`](../../api_docs/python/io_ops.md#QueueBase.close), -or by [closing the session](../../api_docs/python/client.md#Session.close). -A step that is running such a long-running operation will fail by raising -`CancelledError`. - -- - - - -#### `tf.errors.CancelledError.__init__(node_def, op, message)` {#CancelledError.__init__} - -Creates a `CancelledError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.DeadlineExceededError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.DeadlineExceededError.md new file mode 100644 index 0000000000..e8ef3be06e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.DeadlineExceededError.md @@ -0,0 +1,11 @@ +Raised when a deadline expires before an operation could complete. + +This exception is not currently used. 
+ +- - - + +#### `tf.errors.DeadlineExceededError.__init__(node_def, op, message)` {#DeadlineExceededError.__init__} + +Creates a `DeadlineExceededError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.OutOfRangeError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.OutOfRangeError.md deleted file mode 100644 index ef996b0a88..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.OutOfRangeError.md +++ /dev/null @@ -1,15 +0,0 @@ -Raised when an operation iterates past the valid input range. - -This exception is raised in "end-of-file" conditions, such as when a -[`queue.dequeue()`](../../api_docs/python/io_ops.md#QueueBase.dequeue) -operation is blocked on an empty queue, and a -[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) -operation executes. - -- - - - -#### `tf.errors.OutOfRangeError.__init__(node_def, op, message)` {#OutOfRangeError.__init__} - -Creates an `OutOfRangeError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.PermissionDeniedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.PermissionDeniedError.md deleted file mode 100644 index a8a81494c8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.PermissionDeniedError.md +++ /dev/null @@ -1,14 +0,0 @@ -Raised when the caller does not have permission to run an operation. - -For example, running the -[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader) -operation could raise `PermissionDeniedError` if it receives the name of a -file for which the user does not have the read file permission. - -- - - - -#### `tf.errors.PermissionDeniedError.__init__(node_def, op, message)` {#PermissionDeniedError.__init__} - -Creates a `PermissionDeniedError`. 
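Error classes like `OutOfRangeError` above are typically handled with a `try`/`except` loop around a dequeue. A toy illustration of that control flow, using a stand-in exception class and a hypothetical `ToyQueue` (TensorFlow itself is not used here):

```python
class OutOfRangeError(Exception):
    """Stand-in for tf.errors.OutOfRangeError (illustration only)."""

class ToyQueue(object):
    """Hypothetical closed queue: raises once its items run out."""
    def __init__(self, items):
        self.items = list(items)

    def dequeue(self):
        if not self.items:
            raise OutOfRangeError("queue is closed and has no elements")
        return self.items.pop(0)

# The usual pattern: dequeue until the end-of-input error fires.
q = ToyQueue([1, 2, 3])
results = []
try:
    while True:
        results.append(q.dequeue())
except OutOfRangeError:
    pass  # end of input reached; not a failure
print(results)  # [1, 2, 3]
```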
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.UnimplementedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.UnimplementedError.md deleted file mode 100644 index 945daa1a22..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.UnimplementedError.md +++ /dev/null @@ -1,15 +0,0 @@ -Raised when an operation has not been implemented. - -Some operations may raise this error when passed otherwise-valid -arguments that it does not currently support. For example, running -the [`tf.nn.max_pool()`](../../api_docs/python/nn.md#max_pool) operation -would raise this error if pooling was requested on the batch dimension, -because this is not yet supported. - -- - - - -#### `tf.errors.UnimplementedError.__init__(node_def, op, message)` {#UnimplementedError.__init__} - -Creates an `UnimplementedError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fft3d.md new file mode 100644 index 0000000000..7214e3ae20 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fft3d.md @@ -0,0 +1,14 @@ +### `tf.fft3d(input, name=None)` {#fft3d} + +Compute the 3-dimensional discrete Fourier Transform. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 3-D tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. The 3D Fourier Transform of `input`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.gather.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.gather.md deleted file mode 100644 index f3ae59bbb6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.gather.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.gather(params, indices, validate_indices=None, name=None)` {#gather} - -Gather slices from `params` according to `indices`. - -`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). -Produces an output tensor with shape `indices.shape + params.shape[1:]` where: - - # Scalar indices - output[:, ..., :] = params[indices, :, ... :] - - # Vector indices - output[i, :, ..., :] = params[indices[i], :, ... :] - - # Higher rank indices - output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :] - -If `indices` is a permutation and `len(indices) == params.shape[0]` then -this operation will permute `params` accordingly. - -
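A minimal pure-Python sketch of the 1-D-indices case described above, `output[i] = params[indices[i]]` (illustration only; the real op gathers whole slices along the first dimension):

```python
def gather(params, indices):
    """Sketch of tf.gather with vector indices: pick rows by index."""
    return [params[i] for i in indices]

params = ['a', 'b', 'c', 'd']
print(gather(params, [2, 0, 2, 3]))  # ['c', 'a', 'c', 'd']
# An index vector that is a permutation simply reorders params:
print(gather(params, [3, 2, 1, 0]))  # ['d', 'c', 'b', 'a']
```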
- -
- -##### Args: - - -* `params`: A `Tensor`. -* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. -* `validate_indices`: An optional `bool`. Defaults to `True`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `params`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_collection_ref.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_collection_ref.md deleted file mode 100644 index c393da2233..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_collection_ref.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.get_collection_ref(key)` {#get_collection_ref} - -Wrapper for `Graph.get_collection_ref()` using the default graph. - -See [`Graph.get_collection_ref()`](../../api_docs/python/framework.md#Graph.get_collection_ref) -for more details. - -##### Args: - - -* `key`: The key for the collection. For example, the `GraphKeys` class - contains many standard names for collections. - -##### Returns: - - The list of values in the collection with the given `name`, or an empty - list if no value has been added to that collection. Note that this returns - the collection list itself, which can be modified in place to change the - collection. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_seed.md new file mode 100644 index 0000000000..ccf6712418 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_seed.md @@ -0,0 +1,22 @@ +### `tf.get_seed(op_seed)` {#get_seed} + +Returns the local seeds an operation should use given an op-specific seed. + +Given operation-specific seed, `op_seed`, this helper function returns two +seeds derived from graph-level and op-level seeds. 
Many random operations +internally use the two seeds to allow user to change the seed globally for a +graph, or for only specific operations. + +For details on how the graph-level seed interacts with op seeds, see +[`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed). + +##### Args: + + +* `op_seed`: integer. + +##### Returns: + + A tuple of two integers that should be used for the local seed of this + operation. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_handle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_handle.md new file mode 100644 index 0000000000..3fdd2b0ae9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_handle.md @@ -0,0 +1,38 @@ +### `tf.get_session_handle(data, name=None)` {#get_session_handle} + +Return the handle of `data`. + +This is EXPERIMENTAL and subject to change. + +Keep `data` "in-place" in the runtime and create a handle that can be +used to retrieve `data` in a subsequent run(). + +Combined with `get_session_tensor`, we can keep a tensor produced in +one run call in place, and use it as the input in a future run call. +Below is a simple example: + +```python +c = tf.mul(a, b) +h = tf.get_session_handle(c) +h = sess.run(h) + +p, a = tf.get_session_tensor(tf.float32) +b = tf.mul(a, 10) +c = sess.run(b, feed_dict={p: h.handle}) +``` + +##### Args: + + +* `data`: A tensor to be stored in the session. +* `name`: Optional name prefix for the return tensor. + +##### Returns: + + A scalar string tensor representing a unique handle for `data`. + +##### Raises: + + +* `TypeError`: if `data` is not a Tensor. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_tensor.md deleted file mode 100644 index 215647a989..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.get_session_tensor.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.get_session_tensor(dtype, name=None)` {#get_session_tensor} - -Get the tensor of type `dtype` by feeding a tensor handle. - -This is EXPERIMENTAL and subject to change. - -Get the value of the tensor from a tensor handle. The tensor -is produced in a previous run() and stored in the state of the -session. - -##### Args: - - -* `dtype`: The type of the output tensor. -* `name`: Optional name prefix for the return tensor. - -##### Returns: - - A pair of tensors. The first is a placeholder for feeding a - tensor handle and the second is the tensor in the session state - keyed by the tensor handle. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.global_norm.md deleted file mode 100644 index d37d4228b2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.global_norm.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.global_norm(t_list, name=None)` {#global_norm} - -Computes the global norm of multiple tensors. - -Given a tuple or list of tensors `t_list`, this operation returns the -global norm of the elements in all tensors in `t_list`. The global norm is -computed as: - -`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))` - -Any entries in `t_list` that are of type None are ignored. - -##### Args: - - -* `t_list`: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. -* `name`: A name for the operation (optional). - -##### Returns: - - A 0-D (scalar) `Tensor` of type `float`. - -##### Raises: - - -* `TypeError`: If `t_list` is not a sequence. 
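The `global_norm` formula above can be sketched in pure Python, modeling each tensor as a flat list of floats (assumption: `None` entries are skipped, as the doc states):

```python
import math

def global_norm(t_list):
    """sqrt(sum(l2norm(t)**2 for t in t_list)), ignoring None entries."""
    total = 0.0
    for t in t_list:
        if t is None:
            continue
        total += sum(x * x for x in t)  # squared l2 norm of this tensor
    return math.sqrt(total)

print(global_norm([[3.0], None, [4.0]]))  # sqrt(9 + 16) = 5.0
```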
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.identity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.identity.md new file mode 100644 index 0000000000..13f1318601 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.identity.md @@ -0,0 +1,14 @@ +### `tf.identity(input, name=None)` {#identity} + +Return a tensor with the same shape and contents as the input tensor or value. + +##### Args: + + +* `input`: A `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft2d.md deleted file mode 100644 index 0ca8eb8dc1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft2d.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.ifft2d(input, name=None)` {#ifft2d} - -Compute the inverse 2-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 matrix. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - The inverse 2D Fourier Transform of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft3d.md new file mode 100644 index 0000000000..35d58888ac --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ifft3d.md @@ -0,0 +1,15 @@ +### `tf.ifft3d(input, name=None)` {#ifft3d} + +Compute the inverse 3-dimensional discrete Fourier Transform. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 3-D tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + The inverse 3D Fourier Transform of `input`. 
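As a one-dimensional sketch of the forward/inverse transform pair that `fft3d`/`ifft3d` apply along each axis (illustration only; note the `1/N` factor on the inverse, so that `idft(dft(x))` recovers `x`):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform of a complex sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(x):
    """Inverse DFT: conjugate exponent plus a 1/N normalization."""
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)) / n for j in range(n)]

signal = [1 + 0j, 2 + 0j, 3 + 0j, 4 + 0j]
roundtrip = idft(dft(signal))
print(all(abs(a - b) < 1e-9 for a, b in zip(signal, roundtrip)))  # True
```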
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.igammac.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.igammac.md new file mode 100644 index 0000000000..1b739bcfca --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.igammac.md @@ -0,0 +1,29 @@ +### `tf.igammac(a, x, name=None)` {#igammac} + +Compute the upper regularized incomplete Gamma function `Q(a, x)`. + +The upper regularized incomplete Gamma function is defined as: + +``` +Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x) +``` +where +``` +Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt +``` +is the upper incomplete Gamma function. + +Note, above `P(a, x)` (`Igamma`) is the lower regularized incomplete +Gamma function. + +##### Args: + + +* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `x`: A `Tensor`. Must have the same type as `a`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `a`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.imag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.imag.md new file mode 100644 index 0000000000..1dfcadbb95 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.imag.md @@ -0,0 +1,27 @@ +### `tf.imag(input, name=None)` {#imag} + +Returns the imaginary part of a complex number. + +Given a tensor `input` of complex numbers, this operation returns a tensor of +type `float` or `double` that is the imaginary part of each element in +`input`. All elements in `input` must be complex numbers of the form \(a + +bj\), where *a* is the real part and *b* is the imaginary part returned by +this operation. + +For example: + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.imag(input) ==> [4.75, 5.75] +``` + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `complex64`, `complex128`. 
+* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float` or `double`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.adjust_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.adjust_saturation.md new file mode 100644 index 0000000000..1829271ff6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.adjust_saturation.md @@ -0,0 +1,25 @@ +### `tf.image.adjust_saturation(image, saturation_factor, name=None)` {#adjust_saturation} + +Adjust saturation of an RGB image. + +This is a convenience method that converts an RGB image to float +representation, converts it to HSV, scales the saturation channel, +converts back to RGB and then back to the original data type. If several +adjustments are chained, it is advisable to minimize the number of redundant +conversions. + +`image` is an RGB image. The image saturation is adjusted by converting the +image to HSV and multiplying the saturation (S) channel by +`saturation_factor` and clipping. The image is then converted back to RGB. + +##### Args: + + +* `image`: RGB image or images. Size of the last dimension must be 3. +* `saturation_factor`: float. Factor to multiply the saturation by. +* `name`: A name for this operation (optional). + +##### Returns: + + Adjusted image(s), same shape and DType as `image`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.convert_image_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.convert_image_dtype.md new file mode 100644 index 0000000000..63db6f36a9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.convert_image_dtype.md @@ -0,0 +1,32 @@ +### `tf.image.convert_image_dtype(image, dtype, saturate=False, name=None)` {#convert_image_dtype} + +Convert `image` to `dtype`, scaling its values if needed. 
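For instance, an integer-to-float conversion rescales by the largest representable value of the source type. A rough pure-Python sketch assuming a `uint8` source (illustration only, not the op's full logic):

```python
def convert_uint8_to_float(pixels):
    """Scale uint8 values in [0, 255] to floats in [0.0, 1.0] by
    dividing by the largest representable uint8 value (sketch only)."""
    max_value = 255.0
    return [p / max_value for p in pixels]

print(convert_uint8_to_float([0, 128, 255]))  # [0.0, 0.50196..., 1.0]
```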
+ +Images that are represented using floating point values are expected to have +values in the range [0,1). Image data stored in integer data types are +expected to have values in the range `[0,MAX]`, where `MAX` is the largest +positive representable number for the data type. + +This op converts between data types, scaling the values appropriately before +casting. + +Note that converting from floating point inputs to integer types may lead to +over/underflow problems. Set saturate to `True` to avoid such problem in +problematic conversions. If enabled, saturation will clip the output into the +allowed range before performing a potentially dangerous cast (and only before +performing such a cast, i.e., when casting from a floating point to an integer +type, and when casting from a signed to an unsigned type; `saturate` has no +effect on casts between floats, or on casts that increase the type's range). + +##### Args: + + +* `image`: An image. +* `dtype`: A `DType` to convert `image` to. +* `saturate`: If `True`, clip the input before casting (if necessary). +* `name`: A name for this operation (optional). + +##### Returns: + + `image`, converted to `dtype`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_left_right.md new file mode 100644 index 0000000000..ac8c99806e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_left_right.md @@ -0,0 +1,23 @@ +### `tf.image.flip_left_right(image)` {#flip_left_right} + +Flip an image horizontally (left to right). + +Outputs the contents of `image` flipped along the second dimension, which is +`width`. + +See also `reverse()`. + +##### Args: + + +* `image`: A 3-D tensor of shape `[height, width, channels].` + +##### Returns: + + A 3-D tensor of the same type and shape as `image`. 
+ +##### Raises: + + +* `ValueError`: if the shape of `image` is not supported. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_up_down.md deleted file mode 100644 index ed92277f8a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.flip_up_down.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.image.flip_up_down(image)` {#flip_up_down} - -Flip an image vertically (upside down). - -Outputs the contents of `image` flipped along the first dimension, which is -`height`. - -See also `reverse()`. - -##### Args: - - -* `image`: A 3-D tensor of shape `[height, width, channels].` - -##### Returns: - - A 3-D tensor of the same type and shape as `image`. - -##### Raises: - - -* `ValueError`: if the shape of `image` is not supported. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.pad_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.pad_to_bounding_box.md new file mode 100644 index 0000000000..04c155c03c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.pad_to_bounding_box.md @@ -0,0 +1,30 @@ +### `tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#pad_to_bounding_box} + +Pad `image` with zeros to the specified `height` and `width`. + +Adds `offset_height` rows of zeros on top, `offset_width` columns of +zeros on the left, and then pads the image on the bottom and right +with zeros until it has dimensions `target_height`, `target_width`. + +This op does nothing if `offset_*` is zero and the image already has size +`target_height` by `target_width`. + +##### Args: + + +* `image`: 3-D tensor with shape `[height, width, channels]` +* `offset_height`: Number of rows of zeros to add on top. +* `offset_width`: Number of columns of zeros to add on the left. 
+* `target_height`: Height of output image. +* `target_width`: Width of output image. + +##### Returns: + + 3-D tensor of shape `[target_height, target_width, channels]` + +##### Raises: + + +* `ValueError`: If the shape of `image` is incompatible with the `offset_*` or + `target_*` arguments + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_image_with_crop_or_pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_image_with_crop_or_pad.md new file mode 100644 index 0000000000..c93111bd99 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_image_with_crop_or_pad.md @@ -0,0 +1,30 @@ +### `tf.image.resize_image_with_crop_or_pad(image, target_height, target_width)` {#resize_image_with_crop_or_pad} + +Crops and/or pads an image to a target width and height. + +Resizes an image to a target width and height by either centrally +cropping the image or padding it evenly with zeros. + +If `width` or `height` is greater than the specified `target_width` or +`target_height` respectively, this op centrally crops along that dimension. +If `width` or `height` is smaller than the specified `target_width` or +`target_height` respectively, this op centrally pads with 0 along that +dimension. + +##### Args: + + +* `image`: 3-D tensor of shape [height, width, channels] +* `target_height`: Target height. +* `target_width`: Target width. + +##### Raises: + + +* `ValueError`: if `target_height` or `target_width` are zero or negative. 
+ +##### Returns: + + Cropped and/or padded image of shape + `[target_height, target_width, channels]` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_nearest_neighbor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_nearest_neighbor.md new file mode 100644 index 0000000000..ba72e73ebd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.resize_nearest_neighbor.md @@ -0,0 +1,22 @@ +### `tf.image.resize_nearest_neighbor(images, size, align_corners=None, name=None)` {#resize_nearest_neighbor} + +Resize `images` to `size` using nearest neighbor interpolation. + +##### Args: + + +* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. + 4-D with shape `[batch, height, width, channels]`. +* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The + new size for the images. +* `align_corners`: An optional `bool`. Defaults to `False`. + If true, rescale input by (new_height - 1) / (height - 1), which + exactly aligns the 4 corners of images and resized images. If false, rescale + by new_height / height. Treat similarly the width dimension. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `images`. 4-D with shape + `[batch, new_height, new_width, channels]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_grayscale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_grayscale.md new file mode 100644 index 0000000000..bf9b6846e0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_grayscale.md @@ -0,0 +1,19 @@ +### `tf.image.rgb_to_grayscale(images, name=None)` {#rgb_to_grayscale} + +Converts one or more images from RGB to Grayscale. + +Outputs a tensor of the same `DType` and rank as `images`. 
The size of the +last dimension of the output is 1, containing the Grayscale value of the +pixels. + +##### Args: + + +* `images`: The RGB tensor to convert. Last dimension must have size 3 and + should contain RGB values. +* `name`: A name for the operation (optional). + +##### Returns: + + The converted grayscale image(s). + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_hsv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_hsv.md new file mode 100644 index 0000000000..7c5d05f515 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.rgb_to_hsv.md @@ -0,0 +1,23 @@ +### `tf.image.rgb_to_hsv(images, name=None)` {#rgb_to_hsv} + +Converts one or more images from RGB to HSV. + +Outputs a tensor of the same shape as the `images` tensor, containing the HSV +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and +`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 +corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. + +##### Args: + + +* `images`: A `Tensor` of type `float32`. + 1-D or higher rank. RGB data to convert. Last dimension must be size 3. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. `images` converted to HSV. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.is_non_decreasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.is_non_decreasing.md deleted file mode 100644 index f10ff932c0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.is_non_decreasing.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing} - -Returns `True` if `x` is non-decreasing. - -Elements of `x` are compared in row-major order. 
The tensor `[x[0],...]` -is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`. -If `x` has fewer than two elements, it is trivially non-decreasing. - -See also: `is_strictly_increasing` - -##### Args: - - -* `x`: Numeric `Tensor`. -* `name`: A name for this operation (optional). Defaults to "is_non_decreasing" - -##### Returns: - - Boolean `Tensor`, equal to `True` iff `x` is non-decreasing. - -##### Raises: - - -* `TypeError`: if `x` is not a numeric tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.lgamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.lgamma.md deleted file mode 100644 index 2b8fda7dee..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.lgamma.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.lgamma(x, name=None)` {#lgamma} - -Computes the log of the absolute value of `Gamma(x)` element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.linspace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.linspace.md deleted file mode 100644 index 570845f502..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.linspace.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.linspace(start, stop, num, name=None)` {#linspace} - -Generates values in an interval. - -A sequence of `num` evenly-spaced values is generated beginning at `start`. -If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`, -so that the last one is exactly `stop`. - -For example: - -``` -tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0] -``` - -##### Args: - - -* `start`: A `Tensor`. 
Must be one of the following types: `float32`, `float64`. - First entry in the range. -* `stop`: A `Tensor`. Must have the same type as `start`. - Last entry in the range. -* `num`: A `Tensor` of type `int32`. Number of values to generate. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `start`. 1-D. The generated values. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_and.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_and.md new file mode 100644 index 0000000000..dd5b563c8b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_and.md @@ -0,0 +1,15 @@ +### `tf.logical_and(x, y, name=None)` {#logical_and} + +Returns the truth value of x AND y element-wise. + +##### Args: + + +* `x`: A `Tensor` of type `bool`. +* `y`: A `Tensor` of type `bool`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_or.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_or.md deleted file mode 100644 index be18e65e92..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.logical_or.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.logical_or(x, y, name=None)` {#logical_or} - -Returns the truth value of x OR y element-wise. - -##### Args: - - -* `x`: A `Tensor` of type `bool`. -* `y`: A `Tensor` of type `bool`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. 
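The spacing rule in the `tf.linspace` documentation above is easy to get wrong without parentheses: for `num > 1` the step between consecutive values is `(stop - start) / (num - 1)`, which is what makes the last value land exactly on `stop`. A minimal pure-Python sketch of that rule (an illustration, not TensorFlow's kernel):

```python
def linspace(start, stop, num):
    """Evenly spaced values from `start` to `stop` inclusive (num > 1)."""
    # Dividing by (num - 1) rather than num is what makes the final
    # element land exactly on `stop`.
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

print(linspace(10.0, 12.0, 3))  # [10.0, 11.0, 12.0]
```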
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.map_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.map_fn.md deleted file mode 100644 index 1892d7b03c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.map_fn.md +++ /dev/null @@ -1,42 +0,0 @@ -### `tf.map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#map_fn} - -map on the list of tensors unpacked from `elems` on dimension 0. - -This map operator repeatedly applies the callable `fn` to a sequence of -elements from first to last. The elements are made of the tensors unpacked -from `elems`. `dtype` is the data type of the return value of `fn`. Users -must provide `dtype` if it is different from the data type of `elems`. - -Suppose that `elems` is unpacked into `values`, a list of tensors. The shape -of the result tensor is `[len(values)] + fn(values[0]).shape`. - -##### Args: - - -* `fn`: The callable to be performed. -* `elems`: A tensor to be unpacked to apply `fn`. -* `dtype`: (optional) The output type of `fn`. -* `parallel_iterations`: (optional) The number of iterations allowed to run - in parallel. -* `back_prop`: (optional) True enables back propagation. -* `swap_memory`: (optional) True enables GPU-CPU memory swapping. -* `name`: (optional) Name prefix for the returned tensors. - -##### Returns: - - A tensor that packs the results of applying `fn` to the list of tensors - unpacked from `elems`, from first to last. - -##### Raises: - - -* `TypeError`: if `fn` is not callable. 
- -##### Example: - - ```python - elems = [1, 2, 3, 4, 5, 6] - squares = map_fn(lambda x: x * x, elems) - # squares == [1, 4, 9, 16, 25, 36] - ``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.maximum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.maximum.md deleted file mode 100644 index 309946f435..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.maximum.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.maximum(x, y, name=None)` {#maximum} - -Returns the max of x and y (i.e. x > y ? x : y) element-wise, broadcasts. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.merge_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.merge_summary.md new file mode 100644 index 0000000000..b61a501c2d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.merge_summary.md @@ -0,0 +1,26 @@ +### `tf.merge_summary(inputs, collections=None, name=None)` {#merge_summary} + +Merges summaries. + +This op creates a +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +protocol buffer that contains the union of all the values in the input +summaries. + +When the Op is run, it reports an `InvalidArgument` error if multiple values +in the summaries to merge use the same tag. + +##### Args: + + +* `inputs`: A list of `string` `Tensor` objects containing serialized `Summary` + protocol buffers. +* `collections`: Optional list of graph collections keys. The new summary op is + added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + A scalar `Tensor` of type `string`. The serialized `Summary` protocol + buffer resulting from the merging. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.mul.md deleted file mode 100644 index 3d6fa56864..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.mul.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.mul(x, y, name=None)` {#mul} - -Returns x * y element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.neg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.neg.md deleted file mode 100644 index 519fd9a875..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.neg.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.neg(x, name=None)` {#neg} - -Computes numerical negative value element-wise. - -I.e., \\(y = -x\\). - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.atrous_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.atrous_conv2d.md deleted file mode 100644 index cf4c473689..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.atrous_conv2d.md +++ /dev/null @@ -1,107 +0,0 @@ -### `tf.nn.atrous_conv2d(value, filters, rate, padding, name=None)` {#atrous_conv2d} - -Atrous convolution (a.k.a. convolution with holes or dilated convolution). - -Computes a 2-D atrous convolution, also known as convolution with holes or -dilated convolution, given 4-D `value` and `filters` tensors. If the `rate` -parameter is equal to one, it performs regular 2-D convolution. If the `rate` -parameter is greater than one, it performs convolution with holes, sampling -the input values every `rate` pixels in the `height` and `width` dimensions. -This is equivalent to convolving the input with a set of upsampled filters, -produced by inserting `rate - 1` zeros between two consecutive values of the -filters along the `height` and `width` dimensions, hence the name atrous -convolution or convolution with holes (the French word trous means holes in -English). - -More specifically: - - output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] * - value[b, i + rate * di, j + rate * dj, q] - -Atrous convolution allows us to explicitly control how densely to compute -feature responses in fully convolutional networks. Used in conjunction with -bilinear interpolation, it offers an alternative to `conv2d_transpose` in -dense prediction tasks such as semantic image segmentation, optical flow -computation, or depth estimation. It also allows us to effectively enlarge -the field of view of filters without increasing the number of parameters or -the amount of computation. 
- -For a description of atrous convolution and how it can be used for dense -feature extraction, please see: [Semantic Image Segmentation with Deep -Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062). -The same operation is investigated further in [Multi-Scale Context Aggregation -by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works -that effectively use atrous convolution in different ways are, among others, -[OverFeat: Integrated Recognition, Localization and Detection using -Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image -Scanning with Deep Max-Pooling Convolutional Neural Networks] -(http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related -to the so-called noble identities in multi-rate signal processing. - -There are many different ways to implement atrous convolution (see the refs -above). The implementation here reduces - - atrous_conv2d(value, filters, rate, padding=padding) - -to the following three operations: - - paddings = ... - net = space_to_batch(value, paddings, block_size=rate) - net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID") - crops = ... - net = batch_to_space(net, crops, block_size=rate) - -Advanced usage. Note the following optimization: A sequence of `atrous_conv2d` -operations with identical `rate` parameters, 'SAME' `padding`, and filters -with odd heights/ widths: - - net = atrous_conv2d(net, filters1, rate, padding="SAME") - net = atrous_conv2d(net, filters2, rate, padding="SAME") - ... - net = atrous_conv2d(net, filtersK, rate, padding="SAME") - -can be equivalently performed cheaper in terms of computation and memory as: - - pad = ... # padding so that the input dims are multiples of rate - net = space_to_batch(net, paddings=pad, block_size=rate) - net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME") - net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME") - ... 
- net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME") - net = batch_to_space(net, crops=pad, block_size=rate) - -because a pair of consecutive `space_to_batch` and `batch_to_space` ops with -the same `block_size` cancel out when their respective `paddings` and `crops` -inputs are identical. - -##### Args: - - -* `value`: A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC" - format. Its shape is `[batch, in_height, in_width, in_channels]`. -* `filters`: A 4-D `Tensor` with the same type as `value` and shape - `[filter_height, filter_width, in_channels, out_channels]`. `filters`' - `in_channels` dimension must match that of `value`. Atrous convolution is - equivalent to standard convolution with upsampled filters with effective - height `filter_height + (filter_height - 1) * (rate - 1)` and effective - width `filter_width + (filter_width - 1) * (rate - 1)`, produced by - inserting `rate - 1` zeros along consecutive elements across the - `filters`' spatial dimensions. -* `rate`: A positive int32. The stride with which we sample input values across - the `height` and `width` dimensions. Equivalently, the rate by which we - upsample the filter values by inserting zeros across the `height` and - `width` dimensions. In the literature, the same parameter is sometimes - called `input stride` or `dilation`. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `name`: Optional name for the returned tensor. - -##### Returns: - - A `Tensor` with the same type as `value`. - -##### Raises: - - -* `ValueError`: If input/output depth does not match `filters`' shape, or if - padding is other than `'VALID'` or `'SAME'`. 
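The three-op reduction above hinges on the equivalence between atrous convolution and ordinary convolution with a zero-upsampled filter. A 1-D pure-Python sketch (hypothetical helper names, 'VALID' padding only) that checks the two forms agree:

```python
def atrous_conv1d_valid(value, filters, rate):
    """1-D 'VALID' atrous correlation: sample the input every `rate` steps."""
    k = len(filters)
    effective = k + (k - 1) * (rate - 1)  # effective filter width
    return [sum(filters[d] * value[i + rate * d] for d in range(k))
            for i in range(len(value) - effective + 1)]

def upsample(filters, rate):
    """Insert rate - 1 zeros between consecutive filter values."""
    up = []
    for j, f in enumerate(filters):
        up.append(f)
        if j < len(filters) - 1:
            up.extend([0] * (rate - 1))
    return up

def conv1d_valid(value, filters):
    """Ordinary 1-D 'VALID' correlation."""
    k = len(filters)
    return [sum(filters[d] * value[i + d] for d in range(k))
            for i in range(len(value) - k + 1)]

x = [1, 2, 3, 4, 5, 6]
w = [1, 10]
# Atrous with rate=2 equals ordinary correlation with the upsampled filter.
print(atrous_conv1d_valid(x, w, 2))     # [31, 42, 53, 64]
print(conv1d_valid(x, upsample(w, 2)))  # [31, 42, 53, 64]
```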
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv3d.md new file mode 100644 index 0000000000..886744c595 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv3d.md @@ -0,0 +1,29 @@ +### `tf.nn.conv3d(input, filter, strides, padding, name=None)` {#conv3d} + +Computes a 3-D convolution given 5-D `input` and `filter` tensors. + +In signal processing, cross-correlation is a measure of similarity of +two waveforms as a function of a time-lag applied to one of them. This +is also known as a sliding dot product or sliding inner-product. + +Our Conv3D implements a form of cross-correlation. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Shape `[batch, in_depth, in_height, in_width, in_channels]`. +* `filter`: A `Tensor`. Must have the same type as `input`. + Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. + `in_channels` must match between `input` and `filter`. +* `strides`: A list of `ints` that has length `>= 5`. + 1-D tensor of length 5. The stride of the sliding window for each + dimension of `input`. Must have `strides[0] = strides[4] = 1`. +* `padding`: A `string` from: `"SAME", "VALID"`. + The type of padding algorithm to use. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. 
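The "sliding dot product" wording above can be made concrete in one dimension: cross-correlation slides the kernel over the signal without reversing it, whereas true convolution flips the kernel first. A small pure-Python sketch (illustrative only, not the Conv3D kernel):

```python
def correlate(signal, kernel):
    """'VALID' sliding dot product: the kernel is NOT reversed."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def convolve(signal, kernel):
    """True convolution reverses the kernel before sliding it."""
    return correlate(signal, kernel[::-1])

s, w = [1, 2, 3, 4], [1, 0, -1]
# An asymmetric kernel makes the difference visible.
print(correlate(s, w))  # [-2, -2]
print(convolve(s, w))   # [2, 2]
```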
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md deleted file mode 100644 index 7bacc20da8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None)` {#depthwise_conv2d} - -Depthwise 2-D convolution. - -Given an input tensor of shape `[batch, in_height, in_width, in_channels]` -and a filter tensor of shape -`[filter_height, filter_width, in_channels, channel_multiplier]` -containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` -applies a different filter to each input channel (expanding from 1 channel -to `channel_multiplier` channels for each), then concatenates the results -together. The output has `in_channels * channel_multiplier` channels. - -In detail, - - output[b, i, j, k * channel_multiplier + q] = - sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * - filter[di, dj, k, q] - -Must have `strides[0] = strides[3] = 1`. For the most common case of the -same horizontal and vertical strides, `strides = [1, stride, stride, 1]`. - -##### Args: - - -* `input`: 4-D with shape `[batch, in_height, in_width, in_channels]`. -* `filter`: 4-D with shape - `[filter_height, filter_width, in_channels, channel_multiplier]`. -* `strides`: 1-D of size 4. The stride of the sliding window for each - dimension of `input`. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `name`: A name for this operation (optional). 
- -##### Returns: - - A 4-D `Tensor` of shape - `[batch, out_height, out_width, in_channels * channel_multiplier].` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool.md new file mode 100644 index 0000000000..f17efa01de --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool.md @@ -0,0 +1,21 @@ +### `tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#max_pool} + +Performs the max pooling on the input. + +##### Args: + + +* `value`: A 4-D `Tensor` with shape `[batch, height, width, channels]` and + type `tf.float32`. +* `ksize`: A list of ints that has length >= 4. The size of the window for + each dimension of the input tensor. +* `strides`: A list of ints that has length >= 4. The stride of the sliding + window for each dimension of the input tensor. +* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. +* `data_format`: A string. 'NHWC' and 'NCHW' are supported. +* `name`: Optional name for the operation. + +##### Returns: + + A `Tensor` with type `tf.float32`. The max pooled output tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool_with_argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool_with_argmax.md new file mode 100644 index 0000000000..0bf84c16d0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool_with_argmax.md @@ -0,0 +1,30 @@ +### `tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)` {#max_pool_with_argmax} + +Performs max pooling on the input and outputs both max values and indices. + +The indices in `argmax` are flattened, so that a maximum value at position +`[b, y, x, c]` becomes flattened index +`((b * height + y) * width + x) * channels + c`. 
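The flattening formula above is plain mixed-radix arithmetic, so the original `(b, y, x, c)` coordinates can be recovered with repeated `divmod`. A pure-Python sketch (hypothetical helper names; the op itself only emits the flattened form):

```python
def flatten_index(b, y, x, c, height, width, channels):
    """The flattened index used by max_pool_with_argmax's `argmax` output."""
    return ((b * height + y) * width + x) * channels + c

def unflatten_index(idx, height, width, channels):
    """Recover (b, y, x, c) from a flattened argmax index."""
    idx, c = divmod(idx, channels)
    idx, x = divmod(idx, width)
    b, y = divmod(idx, height)
    return b, y, x, c

i = flatten_index(1, 2, 3, 0, height=4, width=5, channels=3)
print(i)                            # ((1*4 + 2)*5 + 3)*3 + 0 = 99
print(unflatten_index(i, 4, 5, 3))  # (1, 2, 3, 0)
```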
+ +##### Args: + + +* `input`: A `Tensor` of type `float32`. + 4-D with shape `[batch, height, width, channels]`. Input to pool over. +* `ksize`: A list of `ints` that has length `>= 4`. + The size of the window for each dimension of the input tensor. +* `strides`: A list of `ints` that has length `>= 4`. + The stride of the sliding window for each dimension of the + input tensor. +* `padding`: A `string` from: `"SAME", "VALID"`. + The type of padding algorithm to use. +* `Targmax`: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of `Tensor` objects (output, argmax). + +* `output`: A `Tensor` of type `float32`. The max pooled output tensor. +* `argmax`: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.moments.md deleted file mode 100644 index 704bb5ba49..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.moments.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)` {#moments} - -Calculate the mean and variance of `x`. - -The mean and variance are calculated by aggregating the contents of `x` -across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean -and variance of a vector. - -When using these moments for batch normalization (see -`tf.nn.batch_normalization`): - * for so-called "global normalization", used with convolutional filters with - shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`. - * for simple batch normalization pass `axes=[0]` (batch only). - -##### Args: - - -* `x`: A `Tensor`. -* `axes`: array of ints. Axes along which to compute mean and - variance. 
-* `shift`: A `Tensor` containing the value by which to shift the data for - numerical stability, or `None` if no shift is to be performed. A shift - close to the true mean provides the most numerically stable results. -* `keep_dims`: produce moments with the same dimensionality as the input. -* `name`: Name used to scope the operations that compute the moments. - -##### Returns: - - Two `Tensor` objects: `mean` and `variance`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sampled_softmax_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sampled_softmax_loss.md new file mode 100644 index 0000000000..6d22f67352 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sampled_softmax_loss.md @@ -0,0 +1,49 @@ +### `tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, partition_strategy='mod', name='sampled_softmax_loss')` {#sampled_softmax_loss} + +Computes and returns the sampled softmax training loss. + +This is a faster way to train a softmax classifier over a huge number of +classes. + +This operation is for training only. It is generally an underestimate of +the full softmax loss. + +At inference time, you can compute full softmax probabilities with the +expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`. + +See our [Candidate Sampling Algorithms Reference] +(../../extras/candidate_sampling.pdf) + +Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) +([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math. + +##### Args: + + +* `weights`: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` + objects whose concatenation along dimension 0 has shape + [num_classes, dim]. The (possibly-sharded) class embeddings. +* `biases`: A `Tensor` of shape `[num_classes]`. The class biases. 
+* `inputs`: A `Tensor` of shape `[batch_size, dim]`. The forward + activations of the input network. +* `labels`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. Note that this format differs from + the `labels` argument of `nn.softmax_cross_entropy_with_logits`. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. +* `num_classes`: An `int`. The number of possible classes. +* `num_true`: An `int`. The number of target classes per training example. +* `sampled_values`: a tuple of (`sampled_candidates`, `true_expected_count`, + `sampled_expected_count`) returned by a `*_candidate_sampler` function. + (if None, we default to `log_uniform_candidate_sampler`) +* `remove_accidental_hits`: A `bool`. whether to remove "accidental hits" + where a sampled class equals one of the target classes. Default is + True. +* `partition_strategy`: A string specifying the partitioning strategy, relevant + if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. + Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. +* `name`: A name for the operation (optional). + +##### Returns: + + A `batch_size` 1-D tensor of per-example sampled softmax losses. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softmax.md new file mode 100644 index 0000000000..be31bb2093 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softmax.md @@ -0,0 +1,19 @@ +### `tf.nn.softmax(logits, name=None)` {#softmax} + +Computes softmax activations. + +For each batch `i` and class `j` we have + + softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i])) + +##### Args: + + +* `logits`: A `Tensor`. Must be one of the following types: `float32`, `float64`. + 2-D with shape `[batch_size, num_classes]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. 
Has the same type as `logits`. Same shape as `logits`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sufficient_statistics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sufficient_statistics.md new file mode 100644 index 0000000000..92cb5596e6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sufficient_statistics.md @@ -0,0 +1,27 @@ +### `tf.nn.sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None)` {#sufficient_statistics} + +Calculate the sufficient statistics for the mean and variance of `x`. + +These sufficient statistics are computed using the one pass algorithm on +an input that's optionally shifted. See: +https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data + +##### Args: + + +* `x`: A `Tensor`. +* `axes`: Array of ints. Axes along which to compute mean and variance. +* `shift`: A `Tensor` containing the value by which to shift the data for + numerical stability, or `None` if no shift is to be performed. A shift + close to the true mean provides the most numerically stable results. +* `keep_dims`: produce statistics with the same dimensionality as the input. +* `name`: Name used to scope the operations that compute the sufficient stats. + +##### Returns: + + Four `Tensor` objects of the same type as `x`: + * the count (number of elements to average over). + * the (possibly shifted) sum of the elements in the array. + * the (possibly shifted) sum of squares of the elements in the array. + * the shift by which the mean must be corrected or None if `shift` is None. 
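The four return values above are enough to recover the moments: with shift `s`, the mean is `s + m_ss / count` and the variance is `v_ss / count - (m_ss / count)**2`. A pure-Python sketch of the one-pass shifted computation (an illustration under those formulas, not the TensorFlow op):

```python
def sufficient_statistics(xs, shift=None):
    """Count, (possibly shifted) sum, and (possibly shifted) sum of squares."""
    s = 0.0 if shift is None else shift
    count = len(xs)
    m_ss = sum(x - s for x in xs)         # (possibly shifted) sum
    v_ss = sum((x - s) ** 2 for x in xs)  # (possibly shifted) sum of squares
    return count, m_ss, v_ss, shift

def moments_from_stats(count, m_ss, v_ss, shift):
    """Correct the statistics back into a mean and variance."""
    s = 0.0 if shift is None else shift
    mean = s + m_ss / count
    variance = v_ss / count - (m_ss / count) ** 2
    return mean, variance

xs = [1.0, 2.0, 3.0, 4.0]
# A shift near the true mean keeps the accumulated sums small and stable.
print(moments_from_stats(*sufficient_statistics(xs, shift=2.5)))  # (2.5, 1.25)
```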
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.weighted_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.weighted_cross_entropy_with_logits.md
deleted file mode 100644
index 697de67936..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.weighted_cross_entropy_with_logits.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.nn.weighted_cross_entropy_with_logits(logits, targets, pos_weight, name=None)` {#weighted_cross_entropy_with_logits}
-
-Computes a weighted cross entropy.
-
-This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`
-allows one to trade off recall and precision by up- or down-weighting the
-cost of a positive error relative to a negative error.
-
-The usual cross-entropy cost is defined as:
-
-  targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
-
-The argument `pos_weight` is used as a multiplier for the positive targets:
-
-  targets * -log(sigmoid(logits)) * pos_weight +
-      (1 - targets) * -log(1 - sigmoid(logits))
-
-For brevity, let `x = logits`, `z = targets`, `q = pos_weight`.
-The loss is:
-
-    qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
-  = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
-  = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
-  = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
-  = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
-  = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
-
-Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,
-the implementation uses
-
-    (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
-
-`logits` and `targets` must have the same type and shape.
-
-##### Args:
-
-
-* `logits`: A `Tensor` of type `float32` or `float64`.
-* `targets`: A `Tensor` of the same type and shape as `logits`.
-* `pos_weight`: A coefficient to use on the positive examples.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of the same shape as `logits` with the componentwise
-  weighted logistic losses.
-
-##### Raises:
-
-
-* `ValueError`: If `logits` and `targets` do not have the same shape.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.not_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.not_equal.md
deleted file mode 100644
index 9c18792223..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.not_equal.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.not_equal(x, y, name=None)` {#not_equal}
-
-Returns the truth value of (x != y) element-wise.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* `y`: A `Tensor`. Must have the same type as `x`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.pad.md
new file mode 100644
index 0000000000..7fbf7442c7
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.pad.md
@@ -0,0 +1,57 @@
+### `tf.pad(tensor, paddings, mode='CONSTANT', name=None)` {#pad}
+
+Pads a tensor.
+
+This operation pads a `tensor` according to the `paddings` you specify.
+`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of
+`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how
+many values to add before the contents of `tensor` in that dimension, and
+`paddings[D, 1]` indicates how many values to add after the contents of
+`tensor` in that dimension.
If `mode` is "REFLECT" then both `paddings[D, 0]` +and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If +`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be +no greater than `tensor.dim_size(D)`. + +The padded size of each dimension D of the output is: + +`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]` + +For example: + +```python +# 't' is [[1, 2, 3], [4, 5, 6]]. +# 'paddings' is [[1, 1,], [2, 2]]. +# rank of 't' is 2. +pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 2, 3, 0, 0], + [0, 0, 4, 5, 6, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + +pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4], + [3, 2, 1, 2, 3, 2, 1], + [6, 5, 4, 5, 6, 5, 4], + [3, 2, 1, 2, 3, 2, 1]] + +pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2], + [2, 1, 1, 2, 3, 3, 2], + [5, 4, 4, 5, 6, 6, 5], + [5, 4, 4, 5, 6, 6, 5]] +``` + +##### Args: + + +* `tensor`: A `Tensor`. +* `paddings`: A `Tensor` of type `int32`. +* `mode`: One of "CONSTANT", "REFLECT", or "SYMMETRIC". +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `tensor`. + +##### Raises: + + +* `ValueError`: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC". + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.parse_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.parse_example.md deleted file mode 100644 index 2f2f511196..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.parse_example.md +++ /dev/null @@ -1,153 +0,0 @@ -### `tf.parse_example(serialized, features, name=None, example_names=None)` {#parse_example} - -Parses `Example` protos into a `dict` of tensors. - -Parses a number of serialized [`Example`] -(https://www.tensorflow.org/code/tensorflow/core/example/example.proto) -protos given in `serialized`. - -`example_names` may contain descriptive names for the corresponding serialized -protos. 
These may be useful for debugging purposes, but they have no effect on -the output. If not `None`, `example_names` must be the same length as `serialized`. - -This op parses serialized examples into a dictionary mapping keys to `Tensor` -and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature` -and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a -`SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`. - -Each `VarLenFeature` maps to a `SparseTensor` of the specified type -representing a ragged matrix. Its indices are `[batch, index]` where `batch` -is the batch entry the value is from in `serialized`, and `index` is the -value's index in the list of values associated with that feature and example. - -Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or -`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`. - -`FixedLenFeature` entries with a `default_value` are optional. With no default -value, we will fail if that `Feature` is missing from any example in -`serialized`. 
- -Examples: - -For example, if one expects a `tf.float32` sparse feature `ft` and three -serialized `Example`s are provided: - -``` -serialized = [ - features - { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, - features - { feature []}, - features - { feature { key: "ft" value { float_list { value: [3.0] } } } -] -``` - -then the output will look like: - -``` -{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]], - values=[1.0, 2.0, 3.0], - shape=(3, 2)) } -``` - -Given two `Example` input protos in `serialized`: - -``` -[ - features { - feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } } - feature { key: "gps" value { float_list { value: [] } } } - }, - features { - feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } } - feature { key: "dank" value { int64_list { value: [ 42 ] } } } - feature { key: "gps" value { } } - } -] -``` - -And arguments - -``` -example_names: ["input0", "input1"], -features: { - "kw": VarLenFeature(tf.string), - "dank": VarLenFeature(tf.int64), - "gps": VarLenFeature(tf.float32), -} -``` - -Then the output is a dictionary: - -```python -{ - "kw": SparseTensor( - indices=[[0, 0], [0, 1], [1, 0]], - values=["knit", "big", "emmy"] - shape=[2, 2]), - "dank": SparseTensor( - indices=[[1, 0]], - values=[42], - shape=[2, 1]), - "gps": SparseTensor( - indices=[], - values=[], - shape=[2, 0]), -} -``` - -For dense results in two serialized `Example`s: - -``` -[ - features { - feature { key: "age" value { int64_list { value: [ 0 ] } } } - feature { key: "gender" value { bytes_list { value: [ "f" ] } } } - }, - features { - feature { key: "age" value { int64_list { value: [] } } } - feature { key: "gender" value { bytes_list { value: [ "f" ] } } } - } -] -``` - -We can use arguments: - -``` -example_names: ["input0", "input1"], -features: { - "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), - "gender": FixedLenFeature([], dtype=tf.string), -} -``` - -And the expected output is: - 
-```python -{ - "age": [[0], [-1]], - "gender": [["f"], ["f"]], -} -``` - -##### Args: - - -* `serialized`: A vector (1-D Tensor) of strings, a batch of binary - serialized `Example` protos. -* `features`: A `dict` mapping feature keys to `FixedLenFeature` or - `VarLenFeature` values. -* `name`: A name for this operation (optional). -* `example_names`: A vector (1-D Tensor) of strings (optional), the names of - the serialized protos in the batch. - -##### Returns: - - A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. - -##### Raises: - - -* `ValueError`: if any feature is invalid. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder.md new file mode 100644 index 0000000000..28cdc11cce --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder.md @@ -0,0 +1,34 @@ +### `tf.placeholder(dtype, shape=None, name=None)` {#placeholder} + +Inserts a placeholder for a tensor that will be always fed. + +**Important**: This tensor will produce an error if evaluated. Its value must +be fed using the `feed_dict` optional argument to `Session.run()`, +`Tensor.eval()`, or `Operation.run()`. + +For example: + +```python +x = tf.placeholder(tf.float32, shape=(1024, 1024)) +y = tf.matmul(x, x) + +with tf.Session() as sess: + print(sess.run(y)) # ERROR: will fail because x was not fed. + + rand_array = np.random.rand(1024, 1024) + print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. +``` + +##### Args: + + +* `dtype`: The type of elements in the tensor to be fed. +* `shape`: The shape of the tensor to be fed (optional). If the shape is not + specified, you can feed a tensor of any shape. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` that may be used as a handle for feeding a value, but not + evaluated directly. 
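The shape check that feeding performs can be pictured with a small pure-Python sketch. This is not TensorFlow code — `shape_compatible` is a made-up helper — it only illustrates the rule that a `None` (unspecified) dimension accepts any size while a concrete dimension must match exactly:

```python
def shape_compatible(placeholder_shape, value_shape):
    """Return True if value_shape satisfies the partial placeholder shape.

    A placeholder shape of None (unknown rank) accepts any value; a None
    entry inside the shape accepts any size for that dimension.
    """
    if placeholder_shape is None:
        return True
    if len(placeholder_shape) != len(value_shape):
        return False
    return all(dim is None or dim == actual
               for dim, actual in zip(placeholder_shape, value_shape))

print(shape_compatible((1024, 1024), (1024, 1024)))  # True
print(shape_compatible((None, 1024), (32, 1024)))    # True
print(shape_compatible((1024, 1024), (32, 1024)))    # False
```

Passing `shape=(None, 1024)` at construction time is the usual way to accept a variable batch size while still pinning the feature dimension.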
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.tf_record_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.tf_record_iterator.md
new file mode 100644
index 0000000000..f5e90ea422
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.tf_record_iterator.md
@@ -0,0 +1,18 @@
+### `tf.python_io.tf_record_iterator(path)` {#tf_record_iterator}
+
+An iterator that reads the records from a TFRecords file.
+
+##### Args:
+
+
+* `path`: The path to the TFRecords file.
+
+##### Yields:
+
+  Strings.
+
+##### Raises:
+
+
+* `IOError`: If `path` cannot be opened for reading.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.read_file.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.read_file.md
new file mode 100644
index 0000000000..3c0ad3652a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.read_file.md
@@ -0,0 +1,14 @@
+### `tf.read_file(filename, name=None)` {#read_file}
+
+Reads and outputs the entire contents of the input filename.
+
+##### Args:
+
+
+* `filename`: A `Tensor` of type `string`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `string`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.real.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.real.md
new file mode 100644
index 0000000000..3be066f588
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.real.md
@@ -0,0 +1,28 @@
+### `tf.real(input, name=None)` {#real}
+
+Returns the real part of a complex number.
+
+Given a tensor `input` of complex numbers, this operation returns a tensor of
+type `float` or `double` that is the real part of each element in `input`.
+All elements in `input` must be complex numbers of the form \(a + bj\), +where *a* is the real part returned by this operation and *b* is the +imaginary part. + +For example: + +``` +# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] +tf.real(input) ==> [-2.25, 3.25] +``` + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `complex64`, + `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float` or `double`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reshape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reshape.md deleted file mode 100644 index 057b29e91f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reshape.md +++ /dev/null @@ -1,72 +0,0 @@ -### `tf.reshape(tensor, shape, name=None)` {#reshape} - -Reshapes a tensor. - -Given `tensor`, this operation returns a tensor that has the same values -as `tensor` with shape `shape`. - -If one component of `shape` is the special value -1, the size of that dimension -is computed so that the total size remains constant. In particular, a `shape` -of `[-1]` flattens into 1-D. At most one component of `shape` can be -1. - -If `shape` is 1-D or higher, then the operation returns a tensor with shape -`shape` filled with the values of `tensor`. In this case, the number of elements -implied by `shape` must be the same as the number of elements in `tensor`. 
- -For example: - -```prettyprint -# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] -# tensor 't' has shape [9] -reshape(t, [3, 3]) ==> [[1, 2, 3], - [4, 5, 6], - [7, 8, 9]] - -# tensor 't' is [[[1, 1], [2, 2]], -# [[3, 3], [4, 4]]] -# tensor 't' has shape [2, 2, 2] -reshape(t, [2, 4]) ==> [[1, 1, 2, 2], - [3, 3, 4, 4]] - -# tensor 't' is [[[1, 1, 1], -# [2, 2, 2]], -# [[3, 3, 3], -# [4, 4, 4]], -# [[5, 5, 5], -# [6, 6, 6]]] -# tensor 't' has shape [3, 2, 3] -# pass '[-1]' to flatten 't' -reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6] - -# -1 can also be used to infer the shape - -# -1 is inferred to be 9: -reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], - [4, 4, 4, 5, 5, 5, 6, 6, 6]] -# -1 is inferred to be 2: -reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], - [4, 4, 4, 5, 5, 5, 6, 6, 6]] -# -1 is inferred to be 3: -reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1], - [2, 2, 2], - [3, 3, 3]], - [[4, 4, 4], - [5, 5, 5], - [6, 6, 6]]] - -# tensor 't' is [7] -# shape `[]` reshapes to a scalar -reshape(t, []) ==> 7 -``` - -##### Args: - - -* `tensor`: A `Tensor`. -* `shape`: A `Tensor` of type `int32`. Defines the shape of the output tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.scatter_update.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.scatter_update.md new file mode 100644 index 0000000000..f865b8e9e8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.scatter_update.md @@ -0,0 +1,46 @@ +### `tf.scatter_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_update} + +Applies sparse updates to a variable reference. + +This operation computes + + # Scalar indices + ref[indices, ...] = updates[...] + + # Vector indices (for each i) + ref[indices[i], ...] = updates[i, ...] 
+
+    # High rank indices (for each i, ..., j)
+    ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
+
+This operation outputs `ref` after the update is done.
+This makes it easier to chain operations that need to use the reset value.
+
+If values in `ref` are to be updated more than once because there are
+duplicate entries in `indices`, the order in which the updates happen
+for each value is undefined.
+
+Requires `updates.shape = indices.shape + ref.shape[1:]`.
+
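For a 1-D `ref` and vector `indices`, the update rule above can be sketched in plain Python. This is a simplification for illustration, not the actual kernel; in particular, with duplicate indices this sketch happens to let the last write win, whereas the op leaves that order undefined:

```python
def scatter_update(ref, indices, updates):
    """Sketch of the vector-index case: ref[indices[i]] = updates[i].

    Mutates and returns `ref`, mirroring how the op outputs the updated
    variable for chaining.
    """
    for i, idx in enumerate(indices):
        ref[idx] = updates[i]
    return ref

ref = [10, 20, 30, 40]
print(scatter_update(ref, [3, 1], [99, 77]))  # [10, 77, 30, 99]
```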
+ +
+ +##### Args: + + +* `ref`: A mutable `Tensor`. Should be from a `Variable` node. +* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A tensor of indices into the first dimension of `ref`. +* `updates`: A `Tensor`. Must have the same type as `ref`. + A tensor of updated values to store in `ref`. +* `use_locking`: An optional `bool`. Defaults to `True`. + If True, the assignment will be protected by a lock; + otherwise the behavior is undefined, but may exhibit less contention. +* `name`: A name for the operation (optional). + +##### Returns: + + Same as `ref`. Returned as a convenience for operations that want + to use the updated values after the update is done. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_prod.md deleted file mode 100644 index c9ed2759cf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_prod.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.segment_prod(data, segment_ids, name=None)` {#segment_prod} - -Computes the product along segments of a tensor. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -Computes a tensor such that -\\(output_i = \prod_j data_j\\) where the product is over `j` such -that `segment_ids[j] == i`. - -
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.select.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.select.md new file mode 100644 index 0000000000..b77c9612e8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.select.md @@ -0,0 +1,56 @@ +### `tf.select(condition, t, e, name=None)` {#select} + +Selects elements from `t` or `e`, depending on `condition`. + +The `t`, and `e` tensors must all have the same shape, +and the output will also have that shape. The `condition` tensor +must be a scalar if `t` and `e` are scalars. If `t` and `e` are vectors +or higher rank, then `condition` must be either a vector with size +matching the first dimension of `t`, or must have the same shape as `t`. + +The `condition` tensor acts as a mask that chooses, based on the value at each +element, whether the corresponding element / row in the output should be +taken from `t` (if true) or `e` (if false). + +If `condition` is a vector and `t` and `e` are higher rank matrices, then +it chooses which row (outer dimension) to copy from `t` and `e`. +If `condition` has the same shape as `t` and `e`, then it chooses which +element to copy from `t` and `e`. 
+ +For example: + +```prettyprint +# 'condition' tensor is [[True, False] +# [False, True]] +# 't' is [[1, 2], +# [3, 4]] +# 'e' is [[5, 6], +# [7, 8]] +select(condition, t, e) ==> [[1, 6], + [7, 4]] + + +# 'condition' tensor is [True, False] +# 't' is [[1, 2], +# [3, 4]] +# 'e' is [[5, 6], +# [7, 8]] +select(condition, t, e) ==> [[1, 2], + [7, 8]] + +``` + +##### Args: + + +* `condition`: A `Tensor` of type `bool`. +* `t`: A `Tensor` which may have the same shape as `condition`. + If `condition` is rank 1, `t` may have higher rank, + but its first dimension must match the size of `condition`. +* `e`: A `Tensor` with the same type and shape as `t`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with the same type and shape as `t` and `e`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sin.md deleted file mode 100644 index aeeaf0c7e6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sin.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.sin(x, name=None)` {#sin} - -Computes sin of x element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.slice.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.slice.md deleted file mode 100644 index 6da47df0b0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.slice.md +++ /dev/null @@ -1,47 +0,0 @@ -### `tf.slice(input_, begin, size, name=None)` {#slice} - -Extracts a slice from a tensor. - -This operation extracts a slice of size `size` from a tensor `input` starting -at the location specified by `begin`. 
The slice `size` is represented as a
-tensor shape, where `size[i]` is the number of elements of the 'i'th dimension
-of `input` that you want to slice. The starting location (`begin`) for the
-slice is represented as an offset in each dimension of `input`. In other
-words, `begin[i]` is the offset into the 'i'th dimension of `input` that you
-want to slice from.
-
-`begin` is zero-based; `size` is one-based. If `size[i]` is -1,
-all remaining elements in dimension i are included in the
-slice. In other words, this is equivalent to setting:
-
-`size[i] = input.dim_size(i) - begin[i]`
-
-This operation requires that:
-
-`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
-
-For example:
-
-```
-# 'input' is [[[1, 1, 1], [2, 2, 2]],
-#             [[3, 3, 3], [4, 4, 4]],
-#             [[5, 5, 5], [6, 6, 6]]]
-tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
-tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
-                                            [4, 4, 4]]]
-tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
-                                           [[5, 5, 5]]]
-```
-
-##### Args:
-
-
-* `input_`: A `Tensor`.
-* `begin`: An `int32` or `int64` `Tensor`.
-* `size`: An `int32` or `int64` `Tensor`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_concat.md
new file mode 100644
index 0000000000..8d05472e34
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_concat.md
@@ -0,0 +1,100 @@
+### `tf.sparse_concat(concat_dim, sp_inputs, name=None, expand_nonconcat_dim=False)` {#sparse_concat}
+
+Concatenates a list of `SparseTensor` along the specified dimension.
+
+Concatenation is with respect to the dense versions of each sparse input.
+It is assumed that each input is a `SparseTensor` whose elements are ordered
+along increasing dimension number.
+
+If expand_nonconcat_dim is False, all inputs' shapes must match, except for
+the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are
+allowed to vary among all inputs.
+
+The `indices`, `values`, and `shapes` lists must have the same length.
+
+If expand_nonconcat_dim is False, then the output shape is identical to the
+inputs', except along the concat dimension, where it is the sum of the inputs'
+sizes along that dimension.
+
+If expand_nonconcat_dim is True, then the output shape along the non-concat
+dimensions will be expanded to be the largest among all inputs, and it is the
+sum of the inputs' sizes along the concat dimension.
+
+The output elements will be resorted to preserve the sort order along
+increasing dimension number.
+
+This op runs in `O(M log M)` time, where `M` is the total number of non-empty
+values across all inputs. This is due to the need for an internal sort in
+order to concatenate efficiently across an arbitrary dimension.
+
+For example, if `concat_dim = 1` and the inputs are
+
+    sp_inputs[0]: shape = [2, 3]
+    [0, 2]: "a"
+    [1, 0]: "b"
+    [1, 1]: "c"
+
+    sp_inputs[1]: shape = [2, 4]
+    [0, 1]: "d"
+    [0, 2]: "e"
+
+then the output will be
+
+    shape = [2, 7]
+    [0, 2]: "a"
+    [0, 4]: "d"
+    [0, 5]: "e"
+    [1, 0]: "b"
+    [1, 1]: "c"
+
+Graphically this is equivalent to doing
+
+    [    a] concat [  d e  ] = [    a   d e  ]
+    [b c  ]        [       ]   [b c          ]
+
+Another example, if 'concat_dim = 1' and the inputs are
+
+    sp_inputs[0]: shape = [3, 3]
+    [0, 2]: "a"
+    [1, 0]: "b"
+    [2, 1]: "c"
+
+    sp_inputs[1]: shape = [2, 4]
+    [0, 1]: "d"
+    [0, 2]: "e"
+
+if expand_nonconcat_dim = False, this will result in an error. But if
+expand_nonconcat_dim = True, this will result in:
+
+    shape = [3, 7]
+    [0, 2]: "a"
+    [0, 4]: "d"
+    [0, 5]: "e"
+    [1, 0]: "b"
+    [2, 1]: "c"
+
+Graphically this is equivalent to doing
+
+    [    a] concat [  d e  ] = [    a   d e  ]
+    [b    ]        [       ]   [b            ]
+    [  c  ]                    [  c          ]
+
+
+##### Args:
+
+
+* `concat_dim`: Dimension to concatenate along.
+* `sp_inputs`: List of `SparseTensor` to concatenate.
+* `name`: A name prefix for the returned tensors (optional).
+* `expand_nonconcat_dim`: Whether to allow the expansion in the non-concat
+  dimensions. Defaults to `False`.
+
+##### Returns:
+
+  A `SparseTensor` with the concatenated output.
+
+##### Raises:
+
+
+* `TypeError`: If `sp_inputs` is not a list of `SparseTensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_mask.md
new file mode 100644
index 0000000000..d2fa38733b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_mask.md
@@ -0,0 +1,39 @@
+### `tf.sparse_mask(a, mask_indices, name=None)` {#sparse_mask}
+
+Masks elements of `IndexedSlices`.
+
+Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that
+contains a subset of the slices of `a`. Only the slices at indices not
+specified in `mask_indices` are returned.
+
+This is useful when you need to extract a subset of slices in an
+`IndexedSlices` object.
+
+For example:
+
+```python
+# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
+# with shape [1000, 10]
+a.indices => [12, 26, 37, 45]
+tf.shape(a.values) => [4, 10]
+
+# `b` will be the subset of `a` slices at its second and third indices, so
+# we want to mask off its first and last indices (which are at absolute
+# indices 12, 45)
+b = tf.sparse_mask(a, [12, 45])
+
+b.indices => [26, 37]
+tf.shape(b.values) => [2, 10]
+
+```
+
+##### Args:
+
+
+* `a`: An `IndexedSlices` instance.
+* `mask_indices`: Indices of elements to mask.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  The masked `IndexedSlices` instance.
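The masking behaviour in the example can be sketched on plain Python lists, with strings standing in for slice rows. This is only an illustration of the semantics, not the real `IndexedSlices` API:

```python
def sparse_mask(indices, values, mask_indices):
    """Drop the slices whose index appears in mask_indices, keep the rest
    in their original order."""
    masked = set(mask_indices)
    kept = [(i, v) for i, v in zip(indices, values) if i not in masked]
    return [i for i, _ in kept], [v for _, v in kept]

indices = [12, 26, 37, 45]
values = ["row12", "row26", "row37", "row45"]
new_indices, new_values = sparse_mask(indices, values, [12, 45])
print(new_indices)  # [26, 37]
print(new_values)   # ['row26', 'row37']
```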
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md deleted file mode 100644 index 8ee455be32..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md +++ /dev/null @@ -1,52 +0,0 @@ -### `tf.sparse_to_indicator(sp_input, vocab_size, name=None)` {#sparse_to_indicator} - -Converts a `SparseTensor` of ids into a dense bool indicator tensor. - -The last dimension of `sp_input.indices` is discarded and replaced with -the values of `sp_input`. If `sp_input.shape = [D0, D1, ..., Dn, K]`, then -`output.shape = [D0, D1, ..., Dn, vocab_size]`, where - - output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True - -and False elsewhere in `output`. - -For example, if `sp_input.shape = [2, 3, 4]` with non-empty values: - - [0, 0, 0]: 0 - [0, 1, 0]: 10 - [1, 0, 3]: 103 - [1, 1, 2]: 150 - [1, 1, 3]: 149 - [1, 1, 4]: 150 - [1, 2, 1]: 121 - -and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool -tensor with False everywhere except at positions - - (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150), - (1, 2, 121). - -Note that repeats are allowed in the input SparseTensor. -This op is useful for converting `SparseTensor`s into dense formats for -compatibility with ops that expect dense tensors. - -The input `SparseTensor` must be in row-major order. - -##### Args: - - -* `sp_input`: A `SparseTensor` with `values` property of type `int32` or - `int64`. -* `vocab_size`: A scalar int64 Tensor (or Python int) containing the new size - of the last dimension, `all(0 <= sp_input.values < vocab_size)`. -* `name`: A name prefix for the returned tensors (optional) - -##### Returns: - - A dense bool indicator tensor representing the indices with specified value. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. 
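The index transformation described above — dropping the last index dimension and using each value as the new last index — can be sketched in pure Python. The sketch only records which positions of the dense output become True rather than materializing the full boolean tensor; it is an illustration, not a TensorFlow implementation:

```python
def sparse_to_indicator(sp_indices, sp_values, dense_shape, vocab_size):
    """Return the indicator tensor's shape and the set of True positions."""
    out_shape = tuple(dense_shape[:-1]) + (vocab_size,)
    true_positions = {tuple(idx[:-1]) + (val,)
                      for idx, val in zip(sp_indices, sp_values)}
    return out_shape, true_positions

shape, ones = sparse_to_indicator(
    [(0, 0, 0), (0, 1, 0), (1, 2, 1)], [0, 10, 121], (2, 3, 4), 200)
print(shape)         # (2, 3, 200)
print(sorted(ones))  # [(0, 0, 0), (0, 1, 10), (1, 2, 121)]
```

Because the positions are collected in a set, repeated input entries collapse naturally, matching the note that repeats are allowed.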
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_strong.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_strong.md deleted file mode 100644 index 67cf3b6fd9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_strong.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.string_to_hash_bucket_strong(input, num_buckets, key, name=None)` {#string_to_hash_bucket_strong} - -Converts each string in the input Tensor to its hash mod by a number of buckets. - -The hash function is deterministic on the content of the string within the -process. The hash function is a keyed hash function, where attribute `key` -defines the key of the hash function. `key` is an array of 2 elements. - -A strong hash is important when inputs may be malicious, e.g. URLs with -additional components. Adversaries could try to make their inputs hash to the -same bucket for a denial-of-service attack or to skew the results. A strong -hash prevents this by making it dificult, if not infeasible, to compute inputs -that hash to the same bucket. This comes at a cost of roughly 4x higher compute -time than tf.string_to_hash_bucket_fast. - -##### Args: - - -* `input`: A `Tensor` of type `string`. The strings to assign a hash bucket. -* `num_buckets`: An `int` that is `>= 1`. The number of buckets. -* `key`: A list of `ints`. - The key for the keyed hash function passed as a list of two uint64 - elements. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. - A Tensor of the same shape as the input `string_tensor`. 
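The idea of a keyed hash — the bucket depends on both the string and a secret key — can be illustrated with the standard library's BLAKE2b keyed mode. Note this is a stand-in only: TensorFlow's op uses a different keyed hash, so the concrete bucket numbers will not match `tf.string_to_hash_bucket_strong`:

```python
import hashlib

def keyed_string_to_bucket(strings, num_buckets, key):
    """Map each string to a bucket in [0, num_buckets) with a keyed hash.

    `key` is a list of two integers, mirroring the op's two-element key;
    here they are packed into bytes and fed to BLAKE2b's keyed mode.
    """
    key_bytes = b"".join(k.to_bytes(8, "little") for k in key)
    return [int.from_bytes(
                hashlib.blake2b(s.encode("utf-8"), key=key_bytes).digest()[:8],
                "little") % num_buckets
            for s in strings]

buckets = keyed_string_to_bucket(["Hello", "TensorFlow"], 100, [123, 456])
print(all(0 <= b < 100 for b in buckets))  # True
# Same key => same buckets; a different key generally remaps the strings.
print(buckets == keyed_string_to_bucket(["Hello", "TensorFlow"], 100, [123, 456]))  # True
```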
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sub.md deleted file mode 100644 index 2d1da0f0b9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sub.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.sub(x, y, name=None)` {#sub} - -Returns x - y element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.test.compute_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.test.compute_gradient.md deleted file mode 100644 index 19b302d466..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.test.compute_gradient.md +++ /dev/null @@ -1,40 +0,0 @@ -### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None)` {#compute_gradient} - -Computes and returns the theoretical and numerical Jacobian. - -If `x` or `y` is complex, the Jacobian will still be real but the -corresponding Jacobian dimension(s) will be twice as large. This is required -even if both input and output is complex since TensorFlow graphs are not -necessarily holomorphic, and may have gradients not expressible as complex -numbers. For example, if `x` is complex with shape `[m]` and `y` is complex -with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with - - J[:m, :n] = d(Re y)/d(Re x) - J[:m, n:] = d(Im y)/d(Re x) - J[m:, :n] = d(Re y)/d(Im x) - J[m:, n:] = d(Im y)/d(Im x) - -##### Args: - - -* `x`: a tensor or list of tensors -* `x_shape`: the dimensions of x as a tuple or an array of ints. 
If x is a list,
-  then this is the list of shapes.
-
-* `y`: a tensor
-* `y_shape`: the dimensions of y as a tuple or an array of ints.
-* `x_init_value`: (optional) a numpy array of the same shape as "x"
-  representing the initial value of x. If x is a list, this should be a list
-  of numpy arrays. If this is none, the function will pick a random tensor
-  as the initial value.
-* `delta`: (optional) the amount of perturbation.
-* `init_targets`: list of targets to run to initialize model params.
-  TODO(mrry): remove this argument.
-
-##### Returns:
-
-  Two 2-d numpy arrays representing the theoretical and numerical
-  Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns
-  where "x_size" is the number of elements in x and "y_size" is the
-  number of elements in y. If x is a list, returns a list of two numpy arrays.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.to_int32.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.to_int32.md
new file mode 100644
index 0000000000..fcc9db61cc
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.to_int32.md
@@ -0,0 +1,19 @@
+### `tf.to_int32(x, name='ToInt32')` {#to_int32}
+
+Casts a tensor to type `int32`.
+
+##### Args:
+
+
+* `x`: A `Tensor` or `SparseTensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`.
+
+##### Raises:
+
+
+* `TypeError`: If `x` cannot be cast to the `int32`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.trace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.trace.md
new file mode 100644
index 0000000000..3b1e71fda1
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.trace.md
@@ -0,0 +1,29 @@
+### `tf.trace(x, name=None)` {#trace}
+
+Compute the trace of a tensor `x`.
+
+`trace(x)` returns the sum along the diagonal.
+ +For example: + +```python +# 'x' is [[1, 1], +# [1, 1]] +tf.trace(x) ==> 2 + +# 'x' is [[1,2,3], +# [4,5,6], +# [7,8,9]] +tf.trace(x) ==> 15 +``` + +##### Args: + + +* `x`: 2-D tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + The trace of input tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.ClusterSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.ClusterSpec.md new file mode 100644 index 0000000000..c695781a86 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.ClusterSpec.md @@ -0,0 +1,86 @@ +Represents a cluster as a set of "tasks", organized into "jobs". + +A `tf.train.ClusterSpec` represents the set of processes that +participate in a distributed TensorFlow computation. Every +[`tf.train.Server`](#Server) is constructed in a particular cluster. + +To create a cluster with two jobs and five tasks, you specify the +mapping from job names to lists of network addresses (typically +hostname-port pairs). + +``` +cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", + "worker1.example.com:2222", + "worker2.example.com:2222"], + "ps": ["ps0.example.com:2222", + "ps1.example.com:2222"]}) +``` + +- - - + +#### `tf.train.ClusterSpec.as_cluster_def()` {#ClusterSpec.as_cluster_def} + +Returns a `tf.train.ClusterDef` protocol buffer based on this cluster. + + +- - - + +#### `tf.train.ClusterSpec.as_dict()` {#ClusterSpec.as_dict} + +Returns a dictionary from job names to lists of network addresses. + + + +#### Other Methods +- - - + +#### `tf.train.ClusterSpec.__init__(cluster)` {#ClusterSpec.__init__} + +Creates a `ClusterSpec`. + +##### Args: + + +* `cluster`: A dictionary mapping one or more job names to lists of network + addresses, or a `tf.train.ClusterDef` protocol buffer. 
+ +##### Raises: + + +* `TypeError`: If `cluster` is not a dictionary mapping strings to lists + of strings, and not a `tf.train.ClusterDef` protobuf. + + +- - - + +#### `tf.train.ClusterSpec.job_tasks(job_name)` {#ClusterSpec.job_tasks} + +Returns a list of tasks in the given job. + +##### Args: + + +* `job_name`: The string name of a job in this cluster. + +##### Returns: + + A list of strings, corresponding to the network addresses of tasks in + the given job, ordered by task index. + +##### Raises: + + +* `ValueError`: If `job_name` does not name a job in this cluster. + + +- - - + +#### `tf.train.ClusterSpec.jobs` {#ClusterSpec.jobs} + +Returns a list of job names in this cluster. + +##### Returns: + + A list of strings, corresponding to the names of jobs in this cluster. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SessionManager.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SessionManager.md deleted file mode 100644 index 8bebb8bd29..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SessionManager.md +++ /dev/null @@ -1,187 +0,0 @@ -Training helper that restores from checkpoint and creates session. - -This class is a small wrapper that takes care of session creation and -checkpoint recovery. It also provides functions that to facilitate -coordination among multiple training threads or processes. - -* Checkpointing trained variables as the training progresses. -* Initializing variables on startup, restoring them from the most recent - checkpoint after a crash, or wait for checkpoints to become available. - -### Usage: - -```python -with tf.Graph().as_default(): - ...add operations to the graph... - # Create a SessionManager that will checkpoint the model in '/tmp/mydir'. - sm = SessionManager() - sess = sm.prepare_session(master, init_op, saver, checkpoint_dir) - # Use the session to train the graph. 
- while True: - sess.run() -``` - -`prepare_session()` initializes or restores a model. It requires `init_op` -and `saver` as an argument. - -A second process could wait for the model to be ready by doing the following: - -```python -with tf.Graph().as_default(): - ...add operations to the graph... - # Create a SessionManager that will wait for the model to become ready. - sm = SessionManager() - sess = sm.wait_for_session(master) - # Use the session to train the graph. - while True: - sess.run() -``` - -`wait_for_session()` waits for a model to be initialized by other processes. -- - - - -#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__} - -Creates a SessionManager. - -The `local_init_op` is an `Operation` that is run always after a new session -was created. If `None`, this step is skipped. - -The `ready_op` is an `Operation` used to check if the model is ready. The -model is considered ready if that operation returns an empty string tensor. -If the operation returns non empty string tensor, the elements are -concatenated and used to indicate to the user why the model is not ready. - -If `ready_op` is `None`, the model is not checked for readiness. - -`recovery_wait_secs` is the number of seconds between checks that -the model is ready. It is used by processes to wait for a model to -be initialized or restored. Defaults to 30 seconds. - -##### Args: - - -* `local_init_op`: An `Operation` run immediately after session creation. - Usually used to initialize tables and local variables. -* `ready_op`: An `Operation` to check if the model is initialized. -* `graph`: The `Graph` that the model will use. -* `recovery_wait_secs`: Seconds between checks for the model to be ready. 
- - -- - - - -#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session} - -Creates a `Session`. Makes sure the model is ready to be used. - -Creates a `Session` on 'master'. If a `saver` object is passed in, and -`checkpoint_dir` points to a directory containing valid checkpoint -files, then it will try to recover the model from checkpoint. If -no checkpoint files are available, and `wait_for_checkpoint` is -`True`, then the process would check every `recovery_wait_secs`, -up to `max_wait_secs`, for recovery to succeed. - -If the model cannot be recovered successfully then it is initialized by -either running the provided `init_op`, or calling the provided `init_fn`. -It is an error if the model cannot be recovered and neither an `init_op` -or an `init_fn` are passed. - -This is a convenient function for the following, with a few error checks -added: - -```python -sess, initialized = self.recover_session(master) -if not initialized: - if init_op: - sess.run(init_op, feed_dict=init_feed_dict) - if init_fn; - init_fn(sess) -return sess -``` - -##### Args: - - -* `master`: `String` representation of the TensorFlow master to use. -* `init_op`: Optional `Operation` used to initialize the model. -* `saver`: A `Saver` object used to restore a model. -* `checkpoint_dir`: Path to the checkpoint files. -* `wait_for_checkpoint`: Whether to wait for checkpoint to become available. -* `max_wait_secs`: Maximum time to wait for checkpoints to become available. -* `config`: Optional `ConfigProto` proto used to configure the session. -* `init_feed_dict`: Optional dictionary that maps `Tensor` objects to feed - values. This feed dictionary is passed to the session `run()` call when - running the init op. -* `init_fn`: Optional callable used to initialize the model. Called after the - optional `init_op` is called. 
The callable must accept one argument, - the session being initialized. - -##### Returns: - - A `Session` object that can be used to drive the model. - -##### Raises: - - -* `RuntimeError`: If the model cannot be initialized or recovered. - - -- - - - -#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session} - -Creates a `Session`, recovering if possible. - -Creates a new session on 'master'. If the session is not initialized -and can be recovered from a checkpoint, recover it. - -##### Args: - - -* `master`: `String` representation of the TensorFlow master to use. -* `saver`: A `Saver` object used to restore a model. -* `checkpoint_dir`: Path to the checkpoint files. -* `wait_for_checkpoint`: Whether to wait for checkpoint to become available. -* `max_wait_secs`: Maximum time to wait for checkpoints to become available. -* `config`: Optional `ConfigProto` proto used to configure the session. - -##### Returns: - - A pair (sess, initialized) where 'initialized' is `True` if - the session could be recovered, `False` otherwise. - - -- - - - -#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session} - -Creates a new `Session` and waits for model to be ready. - -Creates a new `Session` on 'master'. Waits for the model to be -initialized or recovered from a checkpoint. It is expected that -another thread or process will make the model ready; this method -is intended for threads/processes that participate in a -distributed training configuration where a different thread/process -is responsible for initializing or recovering the model being trained. - -NB: The amount of time this method waits for the session is bounded -by `max_wait_secs`; by default, it waits indefinitely. - -##### Args: - - -* `master`: `String` representation of the TensorFlow master to use.
- -* `config`: Optional ConfigProto proto used to configure the session. -* `max_wait_secs`: Maximum time to wait for the session to become available. - -##### Returns: - - A `Session`. May be None if the operation exceeds the timeout - specified by config.operation_timeout_in_ms. - -##### Raises: - - tf.DeadlineExceededError: if the session is not available after - max_wait_secs. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.Supervisor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.Supervisor.md new file mode 100644 index 0000000000..b3d17eac2d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.Supervisor.md @@ -0,0 +1,845 @@ +A training helper that checkpoints models and computes summaries. + +The Supervisor is a small wrapper around a `Coordinator`, a `Saver`, +and a `SessionManager` that takes care of common needs of TensorFlow +training programs. + +#### Use for a single program + +```python +with tf.Graph().as_default(): + ...add operations to the graph... + # Create a Supervisor that will checkpoint the model in '/tmp/mydir'. + sv = Supervisor(logdir='/tmp/mydir') + # Get a TensorFlow session managed by the supervisor. + with sv.managed_session(FLAGS.master) as sess: + # Use the session to train the graph. + while not sv.should_stop(): + sess.run(my_train_op) +``` + +Within the `with sv.managed_session()` block all variables in the graph have +been initialized. In addition, a few services have been started to +checkpoint the model and add summaries to the event log. + +If the program crashes and is restarted, the managed session automatically +reinitializes variables from the most recent checkpoint. + +The supervisor is notified of any exception raised by one of the services. +After an exception is raised, `should_stop()` returns `True`. In that case +the training loop should also stop. This is why the training loop has to +check for `sv.should_stop()`.
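The handshake between a failing service thread and the training loop described above can be sketched in plain Python. No TensorFlow here: `StopFlag` is a hypothetical stand-in for the `Coordinator` the Supervisor wraps, and the "service" is a toy thread that fails and reports its exception.

```python
import threading

class StopFlag(object):
    """Minimal stand-in for the Coordinator behaviour described above:
    services report exceptions, and the training loop polls should_stop()."""
    def __init__(self):
        self._stop = threading.Event()
        self.exc = None

    def request_stop(self, exc=None):
        # Record the first exception and raise the stop flag.
        if self.exc is None:
            self.exc = exc
        self._stop.set()

    def should_stop(self):
        return self._stop.is_set()

def checkpoint_service(flag):
    # A background service that hits an error and notifies the supervisor.
    try:
        raise IOError("checkpoint disk full")
    except IOError as e:
        flag.request_stop(e)

flag = StopFlag()
t = threading.Thread(target=checkpoint_service, args=(flag,))
t.start()
t.join()  # the service has failed by the time the loop starts

steps = 0
while not flag.should_stop():  # the training loop must check this
    steps += 1
```

Because the service already requested a stop, the loop body never runs; this is the behaviour a real training loop relies on when a Supervisor service raises.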
+ +Exceptions that indicate that the training inputs have been exhausted, +`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True` +but are not re-raised from the `with` block: they indicate a normal +termination. + +#### Use for multiple replicas + +To train with replicas you deploy the same program in a `Cluster`. +One of the tasks must be identified as the *chief*: the task that handles +initialization, checkpoints, summaries, and recovery. The other tasks +depend on the *chief* for these services. + +The only change you need to make to the single-program code is to indicate +whether the program is running as the *chief*. + +```python +# Choose a task as the chief. This could be based on server_def.task_index, +# or job_def.name, or job_def.tasks. It's entirely up to the end user. +# But there can be only one *chief*. +is_chief = (server_def.task_index == 0) +server = tf.train.Server(server_def) + +with tf.Graph().as_default(): + ...add operations to the graph... + # Create a Supervisor that uses log directory on a shared file system. + # Indicate if you are the 'chief' + sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief) + # Get a Session in a TensorFlow server on the cluster. + with sv.managed_session(server.target) as sess: + # Use the session to train the graph. + while not sv.should_stop(): + sess.run(my_train_op) +``` + +In the *chief* task, the `Supervisor` works exactly as in the first example +above. In the other tasks `sv.managed_session()` waits for the model to have +been initialized before returning a session to the training code. The +non-chief tasks depend on the chief task for initializing the model. + +If one of the tasks crashes and restarts, `managed_session()` +checks if the model is initialized. If yes, it just creates a session and +returns it to the training code that proceeds normally.
If the model needs +to be initialized, the chief task takes care of reinitializing it; the other +tasks just wait for the model to have been initialized. + +NOTE: This modified program still works fine as a single program. +The single program marks itself as the chief. + +#### What `master` string to use + +Whether you are running on your own machine or in a cluster you can use the +following values for the --master flag: + +* Specifying `''` requests an in-process session that does not use RPC. + +* Specifying `'local'` requests a session that uses the RPC-based + "Master interface" to run TensorFlow programs. See + [`tf.train.Server.create_local_server()`](#Server.create_local_server) for + details. + +* Specifying `'grpc://hostname:port'` requests a session that uses + the RPC interface to a specific host, and also allows the in-process + master to access remote TensorFlow workers. Often, it is + appropriate to pass `server.target` (for some `tf.train.Server` + named `server`). + +#### Advanced use + +##### Launching additional services + +`managed_session()` launches the Checkpoint and Summary services (threads). +If you need more services to run you can simply launch them in the block +controlled by `managed_session()`. + +Example: Start a thread to print losses. We want this thread to run +every 60 seconds, so we launch it with `sv.loop()`. + + ```python + ... + sv = Supervisor(logdir='/tmp/mydir') + with sv.managed_session(FLAGS.master) as sess: + sv.loop(60, print_loss, (sess,)) + while not sv.should_stop(): + sess.run(my_train_op) + ``` + +##### Launching fewer services + +`managed_session()` launches the "summary" and "checkpoint" threads, which use +either the optional `summary_op` and `saver` passed to the constructor, or +default ones created automatically by the supervisor. If you want to run +your own summary and checkpointing logic, disable these services by passing +`None` to the `summary_op` and `saver` parameters.
+ +Example: Create summaries manually every 100 steps in the chief. + + ```python + # Create a Supervisor with no automatic summaries. + sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None) + # As summary_op was None, managed_session() does not start the + # summary thread. + with sv.managed_session(FLAGS.master) as sess: + for step in xrange(1000000): + if sv.should_stop(): + break + if is_chief and step % 100 == 0: + # Create the summary every 100 chief steps. + sv.summary_computed(sess, sess.run(my_summary_op)) + else: + # Train normally + sess.run(my_train_op) + ``` + +##### Custom model initialization + +`managed_session()` only supports initializing the model by running an +`init_op` or restoring from the latest checkpoint. If you have special +initialization needs, see how to specify a `local_init_op` when creating the +supervisor. You can also use the `SessionManager` directly to create a +session and check if it could be initialized automatically. + +- - - + +#### `tf.train.Supervisor.__init__(graph=None, ready_op=0, is_chief=True, init_op=0, init_feed_dict=None, local_init_op=0, logdir=None, summary_op=0, saver=0, global_step=0, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=0, init_fn=None)` {#Supervisor.__init__} + +Create a `Supervisor`. + +##### Args: + + +* `graph`: A `Graph`. The graph that the model will use. Defaults to the + default `Graph`. The supervisor may add operations to the graph before + creating a session, but the graph should not be modified by the caller + after passing it to the supervisor. +* `ready_op`: 1-D string `Tensor`. This tensor is evaluated by supervisors in + `prepare_or_wait_for_session()` to check if the model is ready to use. + The model is considered ready if it returns an empty array. 
Defaults to + the tensor returned from `tf.report_uninitialized_variables()`. If + `None`, the model is not checked for readiness. +* `is_chief`: If True, create a chief supervisor in charge of initializing + and restoring the model. If False, create a supervisor that relies + on a chief supervisor for inits and restore. +* `init_op`: `Operation`. Used by chief supervisors to initialize the model + when it cannot be recovered. Defaults to an `Operation` that + initializes all variables. If `None`, no initialization is done + automatically unless you pass a value for `init_fn`, see below. +* `init_feed_dict`: A dictionary that maps `Tensor` objects to feed values. + This feed dictionary will be used when `init_op` is evaluated. +* `local_init_op`: `Operation`. Used by all supervisors to run initializations + that should run for every new supervisor instance. By default these + are table initializers and initializers for local variables. + If `None`, no further per supervisor-instance initialization is + done automatically. +* `logdir`: A string. Optional path to a directory where to checkpoint the + model and log events for the visualizer. Used by chief supervisors. + The directory will be created if it does not exist. +* `summary_op`: An `Operation` that returns a Summary for the event logs. + Used by chief supervisors if a `logdir` was specified. Defaults to the + operation returned from merge_all_summaries(). If `None`, summaries are + not computed automatically. +* `saver`: A Saver object. Used by chief supervisors if a `logdir` was + specified. Defaults to the saver returned by Saver(). + If `None`, the model is not saved automatically. +* `global_step`: An integer Tensor of size 1 that counts steps. The value + from 'global_step' is used in summaries and checkpoint filenames. + Defaults to the op named 'global_step' in the graph if it exists, is of + rank 1, size 1, and of type tf.int32 or tf.int64.
If `None` the global + step is not recorded in summaries and checkpoint files. Used by chief + supervisors if a `logdir` was specified. +* `save_summaries_secs`: Number of seconds between the computation of + summaries for the event log. Defaults to 120 seconds. Pass 0 to + disable summaries. +* `save_model_secs`: Number of seconds between the creation of model + checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints. +* `recovery_wait_secs`: Number of seconds between checks that the model + is ready. Used by supervisors when waiting for a chief supervisor + to initialize or restore the model. Defaults to 30 seconds. +* `stop_grace_secs`: Grace period, in seconds, given to running threads to + stop when `stop()` is called. Defaults to 120 seconds. +* `checkpoint_basename`: The basename for checkpoint saving. +* `session_manager`: `SessionManager`, which manages Session creation and + recovery. If it is `None`, a default `SessionManager` will be created + with the set of arguments passed in for backwards compatibility. +* `summary_writer`: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None` + to indicate that no summaries should be written. +* `init_fn`: Optional callable used to initialize the model. Called + after the optional `init_op` is called. The callable must accept one + argument, the session being initialized. + +##### Returns: + + A `Supervisor`. + + +- - - + +#### `tf.train.Supervisor.managed_session(master='', config=None, start_standard_services=True, close_summary_writer=True)` {#Supervisor.managed_session} + +Returns a context manager for a managed session. + +This context manager creates and automatically recovers a session. It +optionally starts the standard services that handle checkpoints and +summaries. It monitors exceptions raised from the `with` block or from the +services and stops the supervisor as needed. + +The context manager is typically used as follows: + +```python +def train(): + sv = tf.train.Supervisor(...) 
+ + with sv.managed_session() as sess: + for step in xrange(...): + if sv.should_stop(): + break + sess.run(my_train_op) + ...do other things needed at each training step... +``` + +An exception raised from the `with` block or one of the service threads is +raised again when the block exits. This is done after stopping all threads +and closing the session. For example, an `AbortedError` exception, raised +in case of preemption of one of the workers in a distributed model, is +raised again when the block exits. + +If you want to retry the training loop in case of preemption you can do it +as follows: + +```python +def main(...): + while True: + try: + train() + except tf.errors.AbortedError: + pass +``` + +As a special case, exceptions used for control flow, such as +`OutOfRangeError` which reports that input queues are exhausted, are not +raised again from the `with` block: they indicate a clean termination of +the training loop and are considered normal termination. + +##### Args: + + +* `master`: name of the TensorFlow master to use. See the `tf.Session` + constructor for how this is interpreted. +* `config`: Optional `ConfigProto` proto used to configure the session. + Passed as-is to create the session. +* `start_standard_services`: Whether to start the standard services, + such as checkpoint, summary and step counter. +* `close_summary_writer`: Whether to close the summary writer when + closing the session. Defaults to True. + +##### Returns: + + A context manager that yields a `Session` restored from the latest + checkpoint or initialized from scratch if no checkpoint exists. The + session is closed when the `with` block exits. + + +- - - + +#### `tf.train.Supervisor.prepare_or_wait_for_session(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.prepare_or_wait_for_session} + +Make sure the model is ready to be used.
+ +Create a session on 'master', recovering or initializing the model as +needed, or wait for a session to be ready. If running as the chief +and `start_standard_services` is set to True, also call the session +manager to start the standard services. + +##### Args: + + +* `master`: name of the TensorFlow master to use. See the `tf.Session` + constructor for how this is interpreted. +* `config`: Optional ConfigProto proto used to configure the session, + which is passed as-is to create the session. +* `wait_for_checkpoint`: Whether we should wait for the availability of a + checkpoint before creating a session. Defaults to False. +* `max_wait_secs`: Maximum time to wait for the session to become available. +* `start_standard_services`: Whether to start the standard services and the + queue runners. + +##### Returns: + + A Session object that can be used to drive the model. + + +- - - + +#### `tf.train.Supervisor.start_standard_services(sess)` {#Supervisor.start_standard_services} + +Start the standard services for 'sess'. + +This starts services in the background. The services started depend +on the parameters to the constructor and may include: + + - A Summary thread computing summaries every save_summaries_secs. + - A Checkpoint thread saving the model every save_model_secs. + - A StepCounter thread measuring step time. + +##### Args: + + +* `sess`: A Session. + +##### Returns: + + A list of threads that are running the standard services. You can use + the Supervisor's Coordinator to join these threads with: + sv.coord.join() + +##### Raises: + + +* `RuntimeError`: If called with a non-chief Supervisor. +* `ValueError`: If no `logdir` was passed to the constructor as the + services need a log directory. + + +- - - + +#### `tf.train.Supervisor.start_queue_runners(sess, queue_runners=None)` {#Supervisor.start_queue_runners} + +Start threads for `QueueRunners`.
+ +Note that the queue runners collected in the graph key `QUEUE_RUNNERS` +are already started automatically when you create a session with the +supervisor, so unless you have non-collected queue runners to start +you do not need to call this explicitly. + +##### Args: + + +* `sess`: A `Session`. +* `queue_runners`: A list of `QueueRunners`. If not specified, we'll use the + list of queue runners gathered in the graph under the key + `GraphKeys.QUEUE_RUNNERS`. + +##### Returns: + + The list of threads started for the `QueueRunners`. + + +- - - + +#### `tf.train.Supervisor.summary_computed(sess, summary, global_step=None)` {#Supervisor.summary_computed} + +Indicate that a summary was computed. + +##### Args: + + +* `sess`: A `Session` object. +* `summary`: A Summary proto, or a string holding a serialized summary proto. +* `global_step`: Int. The global step this summary is associated with. If + `None`, it will try to fetch the current step. + +##### Raises: + + +* `TypeError`: if 'summary' is not a Summary proto or a string. +* `RuntimeError`: if the Supervisor was created without a `logdir`. + + + +- - - + +#### `tf.train.Supervisor.stop(threads=None, close_summary_writer=True)` {#Supervisor.stop} + +Stop the services and the coordinator. + +This does not close the session. + +##### Args: + + +* `threads`: Optional list of threads to join with the coordinator. If + `None`, defaults to the threads running the standard services, the + threads started for `QueueRunners`, and the threads started by the + `loop()` method. To wait on additional threads, pass the + list in this parameter. +* `close_summary_writer`: Whether to close the `summary_writer`. Defaults to + `True` if the summary writer was created by the supervisor, `False` + otherwise. + + +- - - + +#### `tf.train.Supervisor.request_stop(ex=None)` {#Supervisor.request_stop} + +Request that the coordinator stop the threads. + +See `Coordinator.request_stop()`.
+ +##### Args: + + +* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by + `sys.exc_info()`. If this is the first call to `request_stop()` the + corresponding exception is recorded and re-raised from `join()`. + + +- - - + +#### `tf.train.Supervisor.should_stop()` {#Supervisor.should_stop} + +Check if the coordinator was told to stop. + +See `Coordinator.should_stop()`. + +##### Returns: + + True if the coordinator was told to stop, False otherwise. + + +- - - + +#### `tf.train.Supervisor.stop_on_exception()` {#Supervisor.stop_on_exception} + +Context handler to stop the supervisor when an exception is raised. + +See `Coordinator.stop_on_exception()`. + +##### Returns: + + A context handler. + + +- - - + +#### `tf.train.Supervisor.wait_for_stop()` {#Supervisor.wait_for_stop} + +Block waiting for the coordinator to stop. + + + +#### Other Methods +- - - + +#### `tf.train.Supervisor.Loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.Loop} + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)` +repeatedly. Otherwise it calls it every `timer_interval_secs` +seconds. The thread terminates when a stop is requested. + +The started thread is added to the list of threads managed by the supervisor +so it does not need to be passed to the `stop()` method. + +##### Args: + + +* `timer_interval_secs`: Number. Time boundaries at which to call `target`. +* `target`: A callable object. +* `args`: Optional arguments to pass to `target` when calling it. +* `kwargs`: Optional keyword arguments to pass to `target` when calling it. + +##### Returns: + + The started thread. + + +- - - + +#### `tf.train.Supervisor.PrepareSession(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.PrepareSession} + +Make sure the model is ready to be used. 
+ +Create a session on 'master', recovering or initializing the model as +needed, or wait for a session to be ready. If running as the chief +and `start_standard_services` is set to True, also call the session +manager to start the standard services. + +##### Args: + + +* `master`: name of the TensorFlow master to use. See the `tf.Session` + constructor for how this is interpreted. +* `config`: Optional ConfigProto proto used to configure the session, + which is passed as-is to create the session. +* `wait_for_checkpoint`: Whether we should wait for the availability of a + checkpoint before creating a session. Defaults to False. +* `max_wait_secs`: Maximum time to wait for the session to become available. +* `start_standard_services`: Whether to start the standard services and the + queue runners. + +##### Returns: + + A Session object that can be used to drive the model. + + +- - - + +#### `tf.train.Supervisor.RequestStop(ex=None)` {#Supervisor.RequestStop} + +Request that the coordinator stop the threads. + +See `Coordinator.request_stop()`. + +##### Args: + + +* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by + `sys.exc_info()`. If this is the first call to `request_stop()` the + corresponding exception is recorded and re-raised from `join()`. + + +- - - + +#### `tf.train.Supervisor.ShouldStop()` {#Supervisor.ShouldStop} + +Check if the coordinator was told to stop. + +See `Coordinator.should_stop()`. + +##### Returns: + + True if the coordinator was told to stop, False otherwise. + + +- - - + +#### `tf.train.Supervisor.StartQueueRunners(sess, queue_runners=None)` {#Supervisor.StartQueueRunners} + +Start threads for `QueueRunners`. + +Note that the queue runners collected in the graph key `QUEUE_RUNNERS` +are already started automatically when you create a session with the +supervisor, so unless you have non-collected queue runners to start +you do not need to call this explicitly. + +##### Args: + + +* `sess`: A `Session`.
+ +* `queue_runners`: A list of `QueueRunners`. If not specified, we'll use the + list of queue runners gathered in the graph under the key + `GraphKeys.QUEUE_RUNNERS`. + +##### Returns: + + The list of threads started for the `QueueRunners`. + + +- - - + +#### `tf.train.Supervisor.StartStandardServices(sess)` {#Supervisor.StartStandardServices} + +Start the standard services for 'sess'. + +This starts services in the background. The services started depend +on the parameters to the constructor and may include: + + - A Summary thread computing summaries every save_summaries_secs. + - A Checkpoint thread saving the model every save_model_secs. + - A StepCounter thread measuring step time. + +##### Args: + + +* `sess`: A Session. + +##### Returns: + + A list of threads that are running the standard services. You can use + the Supervisor's Coordinator to join these threads with: + sv.coord.join() + +##### Raises: + + +* `RuntimeError`: If called with a non-chief Supervisor. +* `ValueError`: If no `logdir` was passed to the constructor as the + services need a log directory. + + +- - - + +#### `tf.train.Supervisor.Stop(threads=None, close_summary_writer=True)` {#Supervisor.Stop} + +Stop the services and the coordinator. + +This does not close the session. + +##### Args: + + +* `threads`: Optional list of threads to join with the coordinator. If + `None`, defaults to the threads running the standard services, the + threads started for `QueueRunners`, and the threads started by the + `loop()` method. To wait on additional threads, pass the + list in this parameter. +* `close_summary_writer`: Whether to close the `summary_writer`. Defaults to + `True` if the summary writer was created by the supervisor, `False` + otherwise. + + +- - - + +#### `tf.train.Supervisor.StopOnException()` {#Supervisor.StopOnException} + +Context handler to stop the supervisor when an exception is raised. + +See `Coordinator.stop_on_exception()`. + +##### Returns: + + A context handler.
+ + +- - - + +#### `tf.train.Supervisor.SummaryComputed(sess, summary, global_step=None)` {#Supervisor.SummaryComputed} + +Indicate that a summary was computed. + +##### Args: + + +* `sess`: A `Session` object. +* `summary`: A Summary proto, or a string holding a serialized summary proto. +* `global_step`: Int. global step this summary is associated with. If `None`, + it will try to fetch the current step. + +##### Raises: + + +* `TypeError`: if 'summary' is not a Summary proto or a string. +* `RuntimeError`: if the Supervisor was created without a `logdir`. + + +- - - + +#### `tf.train.Supervisor.WaitForStop()` {#Supervisor.WaitForStop} + +Block waiting for the coordinator to stop. + + +- - - + +#### `tf.train.Supervisor.coord` {#Supervisor.coord} + +Return the Coordinator used by the Supervisor. + +The Coordinator can be useful if you want to run multiple threads +during your training. + +##### Returns: + + A Coordinator object. + + +- - - + +#### `tf.train.Supervisor.global_step` {#Supervisor.global_step} + +Return the global_step Tensor used by the supervisor. + +##### Returns: + + An integer Tensor for the global_step. + + +- - - + +#### `tf.train.Supervisor.init_feed_dict` {#Supervisor.init_feed_dict} + +Return the feed dictionary used when evaluating the `init_op`. + +##### Returns: + + A feed dictionary or `None`. + + +- - - + +#### `tf.train.Supervisor.init_op` {#Supervisor.init_op} + +Return the Init Op used by the supervisor. + +##### Returns: + + An Op or `None`. + + +- - - + +#### `tf.train.Supervisor.is_chief` {#Supervisor.is_chief} + +Return True if this is a chief supervisor. + +##### Returns: + + A bool. + + +- - - + +#### `tf.train.Supervisor.loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.loop} + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)` +repeatedly. Otherwise it calls it every `timer_interval_secs` +seconds. 
The thread terminates when a stop is requested. + +The started thread is added to the list of threads managed by the supervisor +so it does not need to be passed to the `stop()` method. + +##### Args: + + +* `timer_interval_secs`: Number. Time boundaries at which to call `target`. +* `target`: A callable object. +* `args`: Optional arguments to pass to `target` when calling it. +* `kwargs`: Optional keyword arguments to pass to `target` when calling it. + +##### Returns: + + The started thread. + + +- - - + +#### `tf.train.Supervisor.ready_op` {#Supervisor.ready_op} + +Return the Ready Op used by the supervisor. + +##### Returns: + + An Op or `None`. + + +- - - + +#### `tf.train.Supervisor.save_model_secs` {#Supervisor.save_model_secs} + +Return the delay between checkpoints. + +##### Returns: + + An interval, in seconds. + + +- - - + +#### `tf.train.Supervisor.save_path` {#Supervisor.save_path} + +Return the save path used by the supervisor. + +##### Returns: + + A string. + + +- - - + +#### `tf.train.Supervisor.save_summaries_secs` {#Supervisor.save_summaries_secs} + +Return the delay between summary computations. + +##### Returns: + + An interval, in seconds. + + +- - - + +#### `tf.train.Supervisor.saver` {#Supervisor.saver} + +Return the Saver used by the supervisor. + +##### Returns: + + A Saver object. + + +- - - + +#### `tf.train.Supervisor.session_manager` {#Supervisor.session_manager} + +Return the SessionManager used by the Supervisor. + +##### Returns: + + A SessionManager object. + + +- - - + +#### `tf.train.Supervisor.summary_op` {#Supervisor.summary_op} + +Return the Summary Tensor used by the chief supervisor. + +##### Returns: + + A string Tensor for the summary or `None`. + + +- - - + +#### `tf.train.Supervisor.summary_writer` {#Supervisor.summary_writer} + +Return the SummaryWriter used by the chief supervisor. + +##### Returns: + + A SummaryWriter.
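The `loop()`/LooperThread behaviour documented above (call `target` every `timer_interval_secs` until a stop is requested) can be sketched in plain Python with the standard `threading` module. This is a hypothetical stand-in, not the Supervisor's actual implementation:

```python
import threading
import time

class LooperThread(threading.Thread):
    """Sketch of the loop() behaviour described above: call `target`
    every `timer_interval_secs` until the stop event is set."""
    def __init__(self, stop_event, timer_interval_secs, target, args=(), kwargs=None):
        super(LooperThread, self).__init__()
        self.daemon = True
        self._stop_event = stop_event
        self._interval = timer_interval_secs
        self._target_fn = target
        self._call_args = args
        self._call_kwargs = kwargs or {}

    def run(self):
        while not self._stop_event.is_set():
            self._target_fn(*self._call_args, **self._call_kwargs)
            if self._interval is None:
                continue  # no interval: call target back to back
            # Sleep for the interval, waking early if a stop is requested.
            self._stop_event.wait(self._interval)

stop = threading.Event()
calls = []
t = LooperThread(stop, 0.01, calls.append, args=("tick",))
t.start()
time.sleep(0.05)   # let the looper run a few intervals
stop.set()         # request a stop, as sv.stop() would
t.join()
```

After `stop.set()` the thread exits on its next wake-up, which mirrors why threads started by `loop()` do not need to be passed to `stop()` explicitly.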
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.get_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.get_checkpoint_state.md deleted file mode 100644 index e2852b2314..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.get_checkpoint_state.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)` {#get_checkpoint_state} - -Returns CheckpointState proto from the "checkpoint" file. - -If the "checkpoint" file contains a valid CheckpointState -proto, returns it. - -##### Args: - - -* `checkpoint_dir`: The directory of checkpoints. -* `latest_filename`: Optional name of the checkpoint file. Defaults to - 'checkpoint'. - -##### Returns: - - A CheckpointState if the state was available, None - otherwise. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.input_producer.md deleted file mode 100644 index 41a417aac3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.input_producer.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.train.input_producer(input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None)` {#input_producer} - -Output the rows of `input_tensor` to a queue for an input pipeline. - -##### Args: - - -* `input_tensor`: A tensor with the rows to produce. Must be at least - one-dimensional. Must either have a fully-defined shape, or - `element_shape` must be defined. -* `element_shape`: (Optional.) A `TensorShape` representing the shape of a - row of `input_tensor`, if it cannot be inferred. -* `num_epochs`: (Optional.) An integer. If specified `input_producer` produces - each row of `input_tensor` `num_epochs` times before generating an - `OutOfRange` error.
If not specified, `input_producer` can cycle through - the rows of `input_tensor` an unlimited number of times. -* `shuffle`: (Optional.) A boolean. If true, the rows are randomly shuffled - within each epoch. -* `seed`: (Optional.) An integer. The seed to use if `shuffle` is true. -* `capacity`: (Optional.) The capacity of the queue to be used for buffering - the input. -* `shared_name`: (Optional.) If set, this queue will be shared under the given - name across multiple sessions. -* `summary_name`: (Optional.) If set, a scalar summary for the current queue - size will be generated, using this name as part of the tag. -* `name`: (Optional.) A name for the queue. - -##### Returns: - - A queue with the output rows. A `QueueRunner` for the queue is - added to the current `QUEUE_RUNNER` collection of the current - graph. - -##### Raises: - - -* `ValueError`: If the shape of the input cannot be inferred from the arguments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.start_queue_runners.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.start_queue_runners.md deleted file mode 100644 index 21ac6efee8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.start_queue_runners.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners} - -Starts all queue runners collected in the graph. - -This is a companion method to `add_queue_runner()`. It just starts -threads for all queue runners collected in the graph. It returns -the list of all threads. - -##### Args: - - -* `sess`: `Session` used to run the queue ops. Defaults to the - default session. -* `coord`: Optional `Coordinator` for coordinating the started threads. -* `daemon`: Whether the threads should be marked as `daemons`, meaning - they don't block program exit.
-* `start`: Set to `False` to only create the threads, not start them. -* `collection`: A `GraphKey` specifying the graph collection to - get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`. - -##### Returns: - - A list of threads. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.string_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.string_input_producer.md new file mode 100644 index 0000000000..5ca2a4cb86 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.string_input_producer.md @@ -0,0 +1,32 @@ +### `tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#string_input_producer} + +Output strings (e.g. filenames) to a queue for an input pipeline. + +##### Args: + + +* `string_tensor`: A 1-D string tensor with the strings to produce. +* `num_epochs`: An integer (optional). If specified, `string_input_producer` + produces each string from `string_tensor` `num_epochs` times before + generating an `OutOfRange` error. If not specified, + `string_input_producer` can cycle through the strings in `string_tensor` + an unlimited number of times. +* `shuffle`: Boolean. If true, the strings are randomly shuffled within each + epoch. +* `seed`: An integer (optional). Seed used if shuffle == True. +* `capacity`: An integer. Sets the queue capacity. +* `shared_name`: (optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: A name for the operations (optional). + +##### Returns: + + A queue with the output strings. A `QueueRunner` for the Queue + is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +##### Raises: + + +* `ValueError`: If the string_tensor is a null Python list. At runtime, + will fail with an assertion if string_tensor becomes a null tensor. 
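The epoch and shuffle semantics described for `string_input_producer` can be sketched in pure Python. This is an illustrative model of the emission order only (a hypothetical `string_producer` generator), not the TensorFlow implementation, which runs a queue serviced by background threads:

```python
import random

def string_producer(strings, num_epochs=None, shuffle=True, seed=None):
    """Model of string_input_producer's emission order (illustrative only)."""
    if not strings:
        raise ValueError("string_tensor must not be a null (empty) list")
    rng = random.Random(seed)
    epoch = 0
    while num_epochs is None or epoch < num_epochs:
        order = list(strings)
        if shuffle:
            rng.shuffle(order)  # strings are reshuffled within each epoch
        for s in order:
            yield s
        epoch += 1

# Two epochs over three filenames, unshuffled for determinism.
files = ["a.csv", "b.csv", "c.csv"]
out = list(string_producer(files, num_epochs=2, shuffle=False))
print(out)  # -> ['a.csv', 'b.csv', 'c.csv', 'a.csv', 'b.csv', 'c.csv']
```

With `num_epochs=None` the generator, like the real producer, cycles indefinitely until the consumer stops pulling from it.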
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md deleted file mode 100644 index 68747fc0c7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)` {#update_checkpoint_state} - -Updates the content of the 'checkpoint' file. - -This updates the checkpoint file containing a CheckpointState -proto. - -##### Args: - - -* `save_dir`: Directory where the model was saved. -* `model_checkpoint_path`: The checkpoint file. -* `all_model_checkpoint_paths`: List of strings. Paths to all not-yet-deleted - checkpoints, sorted from oldest to newest. If this is a non-empty list, - the last element must be equal to model_checkpoint_path. These paths - are also saved in the CheckpointState proto. -* `latest_filename`: Optional name of the checkpoint file. Default to - 'checkpoint'. - -##### Raises: - - -* `RuntimeError`: If the save paths conflict. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.transpose.md new file mode 100644 index 0000000000..c6b76c7824 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.transpose.md @@ -0,0 +1,49 @@ +### `tf.transpose(a, perm=None, name='transpose')` {#transpose} + +Transposes `a`. Permutes the dimensions according to `perm`. + +The returned tensor's dimension i will correspond to the input dimension +`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is +the rank of the input tensor. Hence by default, this operation performs a +regular matrix transpose on 2-D input Tensors. 
+ +For example: + +```python +# 'x' is [[1 2 3] +# [4 5 6]] +tf.transpose(x) ==> [[1 4] + [2 5] + [3 6]] + +# Equivalently +tf.transpose(x, perm=[1, 0]) ==> [[1 4] + [2 5] + [3 6]] + +# 'perm' is more useful for n-dimensional tensors, for n > 2 +# 'x' is [[[1 2 3] +# [4 5 6]] +# [[7 8 9] +# [10 11 12]]] +# Take the transpose of the matrices in dimension-0 +tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4] + [2 5] + [3 6]] + + [[7 10] + [8 11] + [9 12]]] +``` + +##### Args: + + +* `a`: A `Tensor`. +* `perm`: A permutation of the dimensions of `a`. +* `name`: A name for the operation (optional). + +##### Returns: + + A transposed `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.truncated_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.truncated_normal_initializer.md deleted file mode 100644 index 0a335333e2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.truncated_normal_initializer.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#truncated_normal_initializer} - -Returns an initializer that generates a truncated normal distribution. - -These values are similar to values from a `random_normal_initializer` -except that values more than two standard deviations from the mean -are discarded and re-drawn. This is the recommended initializer for -neural network weights and filters. - -##### Args: - - -* `mean`: a python scalar or a scalar tensor. Mean of the random values - to generate. -* `stddev`: a python scalar or a scalar tensor. Standard deviation of the - random values to generate. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. 
- -##### Returns: - - An initializer that generates tensors with a truncated normal - distribution. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.uniform_unit_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.uniform_unit_scaling_initializer.md new file mode 100644 index 0000000000..6033fbf53a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.uniform_unit_scaling_initializer.md @@ -0,0 +1,47 @@ +### `tf.uniform_unit_scaling_initializer(factor=1.0, seed=None, dtype=tf.float32, full_shape=None)` {#uniform_unit_scaling_initializer} + +Returns an initializer that generates tensors without scaling variance. + +When initializing a deep network, it is in principle advantageous to keep +the scale of the input variance constant, so it does not explode or diminish +by reaching the final layer. If the input is `x` and the operation `x * W`, +and we want to initialize `W` uniformly at random, we need to pick `W` from + + [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)] + +to keep the scale intact, where `dim = W.shape[0]` (the size of the input). +A similar calculation for convolutional networks gives an analogous result +with `dim` equal to the product of the first 3 dimensions. When +nonlinearities are present, we need to multiply this by a constant `factor`. +See [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558) +([pdf](http://arxiv.org/pdf/1412.6558.pdf)) for deeper motivation, experiments +and the calculation of constants. In section 2.3 there, the constants were +numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15. + +If the shape tuple `full_shape` is provided, the scale will be calculated from +this predefined shape. This is useful when a `Variable` is being partitioned +across several shards, and each shard has a smaller shape than the whole. 
+Since the shards are usually concatenated when used, the scale should be +based on the shape of the whole. + +##### Args: + + +* `factor`: Float. A multiplicative factor by which the values will be scaled. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. +* `full_shape`: Tuple or list of integers. The shape used for calculating + scale normalization (instead of the shape passed at creation time). + Useful when creating sharded variables via partitioning. + +##### Returns: + + An initializer that generates tensors with unit variance. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unpack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unpack.md new file mode 100644 index 0000000000..cc4884c720 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unpack.md @@ -0,0 +1,32 @@ +### `tf.unpack(value, num=None, name='unpack')` {#unpack} + +Unpacks the outer dimension of a rank-`R` tensor into rank-`(R-1)` tensors. + +Unpacks `num` tensors from `value` along the first dimension. +If `num` is not specified (the default), it is inferred from `value`'s shape. +If `value.shape[0]` is not known, `ValueError` is raised. + +The ith tensor in `output` is the slice `value[i, ...]`. Each tensor in +`output` has shape `value.shape[1:]`. + +This is the opposite of pack. The numpy equivalent is + + tf.unpack(x, n) = list(x) + +##### Args: + + +* `value`: A rank `R > 0` `Tensor` to be unpacked. +* `num`: An `int`. The first dimension of value. Automatically inferred if + `None` (the default). +* `name`: A name for the operation (optional). + +##### Returns: + + The list of `Tensor` objects unpacked from `value`. 
+ +##### Raises: + + +* `ValueError`: If `num` is unspecified and cannot be inferred. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.DType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.DType.md new file mode 100644 index 0000000000..4c77a143e0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.DType.md @@ -0,0 +1,206 @@ +Represents the type of the elements in a `Tensor`. + +The following `DType` objects are defined: + +* `tf.float16`: 16-bit half-precision floating-point. +* `tf.float32`: 32-bit single-precision floating-point. +* `tf.float64`: 64-bit double-precision floating-point. +* `tf.bfloat16`: 16-bit truncated floating-point. +* `tf.complex64`: 64-bit single-precision complex. +* `tf.complex128`: 128-bit double-precision complex. + +* `tf.int8`: 8-bit signed integer. +* `tf.uint8`: 8-bit unsigned integer. +* `tf.uint16`: 16-bit unsigned integer. +* `tf.int16`: 16-bit signed integer. +* `tf.int32`: 32-bit signed integer. +* `tf.int64`: 64-bit signed integer. + +* `tf.bool`: Boolean. + +* `tf.string`: String. + +* `tf.qint8`: Quantized 8-bit signed integer. +* `tf.quint8`: Quantized 8-bit unsigned integer. +* `tf.qint16`: Quantized 16-bit signed integer. +* `tf.quint16`: Quantized 16-bit unsigned integer. +* `tf.qint32`: Quantized 32-bit signed integer. + +In addition, variants of these types with the `_ref` suffix are +defined for reference-typed tensors. + +The `tf.as_dtype()` function converts numpy types and string type +names to a `DType` object. + +- - - + +#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with} + +Returns True if the `other` DType will be converted to this DType. 
+ +The conversion rules are as follows: + +``` +DType(T) .is_compatible_with(DType(T)) == True +DType(T) .is_compatible_with(DType(T).as_ref) == True +DType(T).as_ref.is_compatible_with(DType(T)) == False +DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True +``` + +##### Args: + + +* `other`: A `DType` (or object that may be converted to a `DType`). + +##### Returns: + + True if a Tensor of the `other` `DType` will be implicitly converted to + this `DType`. + + +- - - + +#### `tf.DType.name` {#DType.name} + +Returns the string name for this `DType`. + + +- - - + +#### `tf.DType.base_dtype` {#DType.base_dtype} + +Returns a non-reference `DType` based on this `DType`. + + +- - - + +#### `tf.DType.real_dtype` {#DType.real_dtype} + +Returns the dtype corresponding to this dtype's real part. + + +- - - + +#### `tf.DType.is_ref_dtype` {#DType.is_ref_dtype} + +Returns `True` if this `DType` represents a reference type. + + +- - - + +#### `tf.DType.as_ref` {#DType.as_ref} + +Returns a reference `DType` based on this `DType`. + + +- - - + +#### `tf.DType.is_floating` {#DType.is_floating} + +Returns whether this is a (real) floating point type. + + +- - - + +#### `tf.DType.is_complex` {#DType.is_complex} + +Returns whether this is a complex floating point type. + + +- - - + +#### `tf.DType.is_integer` {#DType.is_integer} + +Returns whether this is a (non-quantized) integer type. + + +- - - + +#### `tf.DType.is_quantized` {#DType.is_quantized} + +Returns whether this is a quantized data type. + + +- - - + +#### `tf.DType.is_unsigned` {#DType.is_unsigned} + +Returns whether this type is unsigned. + +Non-numeric, unordered, and quantized types are not considered unsigned, and +this function returns `False`. + +##### Returns: + + Whether a `DType` is unsigned. + + + +- - - + +#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype} + +Returns a `numpy.dtype` based on this `DType`.
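The four conversion rules quoted in `is_compatible_with` can be modeled with a small pure-Python class. `SimpleDType` below is a hypothetical toy, not `tf.DType` itself; it only captures the base-versus-reference compatibility behavior:

```python
class SimpleDType:
    """Toy model of the DType/ref-DType compatibility rules (not tf.DType)."""

    def __init__(self, name, is_ref=False):
        self.name = name
        self.is_ref = is_ref

    @property
    def as_ref(self):
        return SimpleDType(self.name, is_ref=True)

    @property
    def base_dtype(self):
        return SimpleDType(self.name, is_ref=False)

    def is_compatible_with(self, other):
        # A non-reference dtype accepts both T and T_ref;
        # a reference dtype accepts only T_ref.
        if self.name != other.name:
            return False
        return other.is_ref or not self.is_ref

f32 = SimpleDType("float32")
assert f32.is_compatible_with(f32)                    # T  <- T
assert f32.is_compatible_with(f32.as_ref)             # T  <- T_ref
assert not f32.as_ref.is_compatible_with(f32)         # T_ref <- T is rejected
assert f32.as_ref.is_compatible_with(f32.as_ref)      # T_ref <- T_ref
```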
+ + +- - - + +#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum} + +Returns a `types_pb2.DataType` enum value based on this `DType`. + + + +#### Other Methods +- - - + +#### `tf.DType.__init__(type_enum)` {#DType.__init__} + +Creates a new `DataType`. + +NOTE(mrry): In normal circumstances, you should not need to +construct a `DataType` object directly. Instead, use the +`tf.as_dtype()` function. + +##### Args: + + +* `type_enum`: A `types_pb2.DataType` enum value. + +##### Raises: + + +* `TypeError`: If `type_enum` is not a value `types_pb2.DataType`. + + +- - - + +#### `tf.DType.max` {#DType.max} + +Returns the maximum representable value in this data type. + +##### Raises: + + +* `TypeError`: if this is a non-numeric, unordered, or quantized type. + + +- - - + +#### `tf.DType.min` {#DType.min} + +Returns the minimum representable value in this data type. + +##### Raises: + + +* `TypeError`: if this is a non-numeric, unordered, or quantized type. + + +- - - + +#### `tf.DType.size` {#DType.size} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Dimension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Dimension.md new file mode 100644 index 0000000000..f149b6cb65 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Dimension.md @@ -0,0 +1,83 @@ +Represents the value of one dimension in a TensorShape. +- - - + +#### `tf.Dimension.__init__(value)` {#Dimension.__init__} + +Creates a new Dimension with the given value. + + +- - - + +#### `tf.Dimension.assert_is_compatible_with(other)` {#Dimension.assert_is_compatible_with} + +Raises an exception if `other` is not compatible with this Dimension. + +##### Args: + + +* `other`: Another Dimension. + +##### Raises: + + +* `ValueError`: If `self` and `other` are not compatible (see + is_compatible_with). 
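The `Dimension` compatibility and merging rules can be modeled in a few lines of plain Python. `Dim` is a hypothetical sketch, not `tf.Dimension`: a known dimension is compatible only with an equal value, an unknown dimension (`None`) is compatible with everything, and merging keeps whichever value is known:

```python
class Dim:
    """Toy model of tf.Dimension compatibility/merging; None means unknown."""

    def __init__(self, value):
        self.value = value

    def is_compatible_with(self, other):
        # Unknown dimensions are compatible with everything;
        # known dimensions must agree exactly.
        return (self.value is None or other.value is None
                or self.value == other.value)

    def merge_with(self, other):
        if not self.is_compatible_with(other):
            raise ValueError("Dimensions %r and %r are not compatible"
                             % (self.value, other.value))
        # Prefer the known value, if any.
        return Dim(self.value if self.value is not None else other.value)

assert Dim(3).is_compatible_with(Dim(None))
assert Dim(3).merge_with(Dim(None)).value == 3
assert Dim(None).merge_with(Dim(None)).value is None
```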
+ + +- - - + +#### `tf.Dimension.is_compatible_with(other)` {#Dimension.is_compatible_with} + +Returns true if `other` is compatible with this Dimension. + +Two known Dimensions are compatible if they have the same value. +An unknown Dimension is compatible with all other Dimensions. + +##### Args: + + +* `other`: Another Dimension. + +##### Returns: + + True if this Dimension and `other` are compatible. + + +- - - + +#### `tf.Dimension.merge_with(other)` {#Dimension.merge_with} + +Returns a Dimension that combines the information in `self` and `other`. + +Dimensions are combined as follows: + + Dimension(n) .merge_with(Dimension(n)) == Dimension(n) + Dimension(n) .merge_with(Dimension(None)) == Dimension(n) + Dimension(None).merge_with(Dimension(n)) == Dimension(n) + Dimension(None).merge_with(Dimension(None)) == Dimension(None) + Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m + +##### Args: + + +* `other`: Another Dimension. + +##### Returns: + + A Dimension containing the combined information of `self` and + `other`. + +##### Raises: + + +* `ValueError`: If `self` and `other` are not compatible (see + is_compatible_with). + + +- - - + +#### `tf.Dimension.value` {#Dimension.value} + +The value of this dimension, or None if it is unknown. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLenSequenceFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLenSequenceFeature.md new file mode 100644 index 0000000000..607b81a9bf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLenSequenceFeature.md @@ -0,0 +1,31 @@ +Configuration for a dense input feature in a sequence item. + +To treat a sparse input as dense, provide `allow_missing=True`; otherwise, +the parse functions will fail on any examples missing this feature. + +Fields: + shape: Shape of input data. + dtype: Data type of input. 
+ allow_missing: Whether to allow this feature to be missing from a feature + list item. +- - - + +#### `tf.FixedLenSequenceFeature.allow_missing` {#FixedLenSequenceFeature.allow_missing} + +Alias for field number 2 + + +- - - + +#### `tf.FixedLenSequenceFeature.dtype` {#FixedLenSequenceFeature.dtype} + +Alias for field number 1 + + +- - - + +#### `tf.FixedLenSequenceFeature.shape` {#FixedLenSequenceFeature.shape} + +Alias for field number 0 + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLengthRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLengthRecordReader.md new file mode 100644 index 0000000000..e8a94cb825 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.FixedLengthRecordReader.md @@ -0,0 +1,148 @@ +A Reader that outputs fixed-length records from a file. + +See ReaderBase for supported methods. +- - - + +#### `tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None)` {#FixedLengthRecordReader.__init__} + +Create a FixedLengthRecordReader. + +##### Args: + + +* `record_bytes`: An int. +* `header_bytes`: An optional int. Defaults to 0. +* `footer_bytes`: An optional int. Defaults to 0. +* `name`: A name for the operation (optional). + + +- - - + +#### `tf.FixedLengthRecordReader.num_records_produced(name=None)` {#FixedLengthRecordReader.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.FixedLengthRecordReader.num_work_units_completed(name=None)` {#FixedLengthRecordReader.num_work_units_completed} + +Returns the number of work units this reader has finished processing. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. 
+ + +- - - + +#### `tf.FixedLengthRecordReader.read(queue, name=None)` {#FixedLengthRecordReader.read} + +Returns the next record (key, value pair) produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.FixedLengthRecordReader.reader_ref` {#FixedLengthRecordReader.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.FixedLengthRecordReader.reset(name=None)` {#FixedLengthRecordReader.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.FixedLengthRecordReader.restore_state(state, name=None)` {#FixedLengthRecordReader.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.FixedLengthRecordReader.serialize_state(name=None)` {#FixedLengthRecordReader.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. 
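The fixed-length framing this reader applies to each file can be sketched in pure Python. `fixed_length_records` is a hypothetical helper that models how one file's bytes are split into records (skip `header_bytes`, drop `footer_bytes`, then cut `record_bytes`-sized slices); the real reader instead streams filenames from a work-unit queue:

```python
def fixed_length_records(data, record_bytes, header_bytes=0, footer_bytes=0):
    """Split one file's bytes into fixed-length records (illustrative only)."""
    # Skip the header and drop the footer, leaving only record payload.
    body = data[header_bytes:len(data) - footer_bytes]
    # Cut complete record_bytes-sized slices from the remaining payload.
    return [body[i:i + record_bytes]
            for i in range(0, len(body) - record_bytes + 1, record_bytes)]

# A file with a 2-byte header, a 1-byte footer, and 3-byte records.
raw = b"HHabcdefF"
print(fixed_length_records(raw, record_bytes=3, header_bytes=2, footer_bytes=1))
# -> [b'abc', b'def']
```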
+ + +- - - + +#### `tf.FixedLengthRecordReader.supports_serialize` {#FixedLengthRecordReader.supports_serialize} + +Whether the Reader implementation can serialize its state. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.GraphKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.GraphKeys.md new file mode 100644 index 0000000000..1d656f4018 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.GraphKeys.md @@ -0,0 +1,36 @@ +Standard names to use for graph collections. + +The standard library uses various well-known names to collect and +retrieve values associated with a graph. For example, the +`tf.Optimizer` subclasses default to optimizing the variables +collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is +specified, but it is also possible to pass an explicit list of +variables. + +The following standard keys are defined: + +* `VARIABLES`: the `Variable` objects that comprise a model, and + must be saved and restored together. See + [`tf.all_variables()`](../../api_docs/python/state_ops.md#all_variables) + for more details. +* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will + be trained by an optimizer. See + [`tf.trainable_variables()`](../../api_docs/python/state_ops.md#trainable_variables) + for more details. +* `SUMMARIES`: the summary `Tensor` objects that have been created in the + graph. See + [`tf.merge_all_summaries()`](../../api_docs/python/train.md#merge_all_summaries) + for more details. +* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to + produce input for a computation. See + [`tf.start_queue_runners()`](../../api_docs/python/train.md#start_queue_runners) + for more details. +* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also + keep moving averages. See + [`tf.moving_average_variables()`](../../api_docs/python/state_ops.md#moving_average_variables) + for more details. 
+* `REGULARIZATION_LOSSES`: regularization losses collected during graph + construction. +* `WEIGHTS`: weights inside neural network layers +* `BIASES`: biases inside neural network layers +* `ACTIVATIONS`: activations of neural network layers diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Session.md new file mode 100644 index 0000000000..62982698dd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Session.md @@ -0,0 +1,236 @@ +A class for running TensorFlow operations. + +A `Session` object encapsulates the environment in which `Operation` +objects are executed, and `Tensor` objects are evaluated. For +example: + +```python +# Build a graph. +a = tf.constant(5.0) +b = tf.constant(6.0) +c = a * b + +# Launch the graph in a session. +sess = tf.Session() + +# Evaluate the tensor `c`. +print(sess.run(c)) +``` + +A session may own resources, such as +[variables](../../api_docs/python/state_ops.md#Variable), [queues](../../api_docs/python/io_ops.md#QueueBase), +and [readers](../../api_docs/python/io_ops.md#ReaderBase). It is important to release +these resources when they are no longer required. To do this, either +invoke the [`close()`](#Session.close) method on the session, or use +the session as a context manager. The following two examples are +equivalent: + +```python +# Using the `close()` method. +sess = tf.Session() +sess.run(...) +sess.close() + +# Using the context manager. +with tf.Session() as sess: + sess.run(...) +``` + +The [`ConfigProto`] +(https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) +protocol buffer exposes various configuration options for a +session. 
For example, to create a session that uses soft constraints +for device placement, and log the resulting placement decisions, +create a session as follows: + +```python +# Launch the graph in a session that allows soft device placement and +# logs the placement decisions. +sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, + log_device_placement=True)) +``` + +- - - + +#### `tf.Session.__init__(target='', graph=None, config=None)` {#Session.__init__} + +Creates a new TensorFlow session. + +If no `graph` argument is specified when constructing the session, +the default graph will be launched in the session. If you are +using more than one graph (created with `tf.Graph()`) in the same +process, you will have to use different sessions for each graph, +but each graph can be used in multiple sessions. In this case, it +is often clearer to pass the graph to be launched explicitly to +the session constructor. + +##### Args: + + +* `target`: (Optional.) The execution engine to connect to. + Defaults to using an in-process engine. At present, no value + other than the empty string is supported. +* `graph`: (Optional.) The `Graph` to be launched (described above). +* `config`: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto) + protocol buffer with configuration options for the session. + + +- - - + +#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run} + +Runs the operations and evaluates the tensors in `fetches`. + +This method runs one "step" of TensorFlow computation, by +running the necessary graph fragment to execute every `Operation` +and evaluate every `Tensor` in `fetches`, substituting the values in +`feed_dict` for the corresponding input values. + +The `fetches` argument may be a list of graph elements or a single +graph element, and these determine the return value of this +method.
A graph element can be one of the following types: + +* If the *i*th element of `fetches` is an + [`Operation`](../../api_docs/python/framework.md#Operation), the *i*th + return value will be `None`. +* If the *i*th element of `fetches` is a + [`Tensor`](../../api_docs/python/framework.md#Tensor), the *i*th return + value will be a numpy ndarray containing the value of that tensor. +* If the *i*th element of `fetches` is a + [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), + the *i*th return value will be a + [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue) + containing the value of that sparse tensor. +* If the *i*th element of `fetches` is produced by a `get_tensor_handle` op, + the *i*th return value will be a numpy ndarray containing the handle of + that tensor. + +The optional `feed_dict` argument allows the caller to override +the value of tensors in the graph. Each key in `feed_dict` can be +one of the following types: + +* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the + value may be a Python scalar, string, list, or numpy ndarray + that can be converted to the same `dtype` as that + tensor. Additionally, if the key is a + [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of + the value will be checked for compatibility with the placeholder. +* If the key is a + [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor), + the value should be a + [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue). + +Each value in `feed_dict` must be convertible to a numpy array of the dtype +of the corresponding key. + +The optional `options` argument expects a [`RunOptions`] proto. The options +allow controlling the behavior of this particular step (e.g. turning tracing +on). + +The optional `run_metadata` argument expects a [`RunMetadata`] proto. When +appropriate, the non-Tensor output of this step will be collected there. 
For +example, when users turn on tracing in `options`, the profiled info will be +collected into this argument and passed back. + +##### Args: + + +* `fetches`: A single graph element, or a list of graph elements + (described above). +* `feed_dict`: A dictionary that maps graph elements to values + (described above). +* `options`: A [`RunOptions`] protocol buffer +* `run_metadata`: A [`RunMetadata`] protocol buffer + +##### Returns: + + Either a single value if `fetches` is a single graph element, or + a list of values if `fetches` is a list (described above). + +##### Raises: + + +* `RuntimeError`: If this `Session` is in an invalid state (e.g. has been + closed). +* `TypeError`: If `fetches` or `feed_dict` keys are of an inappropriate type. +* `ValueError`: If `fetches` or `feed_dict` keys are invalid or refer to a + `Tensor` that doesn't exist. + + +- - - + +#### `tf.Session.close()` {#Session.close} + +Closes this session. + +Calling this method frees all resources associated with the session. + +##### Raises: + + tf.errors.OpError: Or one of its subclasses if an error occurs while + closing the TensorFlow session. + + + +- - - + +#### `tf.Session.graph` {#Session.graph} + +The graph that was launched in this session. + + + +- - - + +#### `tf.Session.as_default()` {#Session.as_default} + +Returns a context manager that makes this object the default session. + +Use with the `with` keyword to specify that calls to +[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or +[`Tensor.run()`](../../api_docs/python/framework.md#Tensor.run) should be +executed in this session. + +```python +c = tf.constant(..) +sess = tf.Session() + +with sess.as_default(): + assert tf.get_default_session() is sess + print(c.eval()) +``` + +To get the current default session, use +[`tf.get_default_session()`](#get_default_session). 
+ + +*N.B.* The `as_default` context manager *does not* close the +session when you exit the context, and you must close the session +explicitly. + +```python +c = tf.constant(...) +sess = tf.Session() +with sess.as_default(): + print(c.eval()) +# ... +with sess.as_default(): + print(c.eval()) + +sess.close() +``` + +Alternatively, you can use `with tf.Session():` to create a +session that is automatically closed on exiting the context, +including when an uncaught exception is raised. + +*N.B.* The default graph is a property of the current thread. If you +create a new thread, and wish to use the default session in that +thread, you must explicitly add a `with sess.as_default():` in that +thread's function. + +##### Returns: + + A context manager using this session as the default session. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.SparseTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.SparseTensor.md new file mode 100644 index 0000000000..a999b3862f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.SparseTensor.md @@ -0,0 +1,143 @@ +Represents a sparse tensor. + +Tensorflow represents a sparse tensor as three separate dense tensors: +`indices`, `values`, and `shape`. In Python, the three tensors are +collected into a `SparseTensor` class for ease of use. If you have separate +`indices`, `values`, and `shape` tensors, wrap them in a `SparseTensor` +object before passing to the ops below. + +Concretely, the sparse tensor `SparseTensor(indices, values, shape)` is + +* `indices`: A 2-D int64 tensor of shape `[N, ndims]`. +* `values`: A 1-D tensor of any type and shape `[N]`. +* `shape`: A 1-D int64 tensor of shape `[ndims]`. + +where `N` and `ndims` are the number of values, and number of dimensions in +the `SparseTensor` respectively. 
+ +The corresponding dense tensor satisfies + +```python +dense.shape = shape +dense[tuple(indices[i])] = values[i] +``` + +By convention, `indices` should be sorted in row-major order (or equivalently +lexicographic order on the tuples `indices[i]`). This is not enforced when +`SparseTensor` objects are constructed, but most ops assume correct ordering. +If the ordering of sparse tensor `st` is wrong, a fixed version can be +obtained by calling `tf.sparse_reorder(st)`. + +Example: The sparse tensor + +```python +SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], shape=[3, 4]) +``` + +represents the dense tensor + +```python +[[1, 0, 0, 0] + [0, 0, 2, 0] + [0, 0, 0, 0]] +``` + +- - - + +#### `tf.SparseTensor.__init__(indices, values, shape)` {#SparseTensor.__init__} + +Creates a `SparseTensor`. + +##### Args: + + +* `indices`: A 2-D int64 tensor of shape `[N, ndims]`. +* `values`: A 1-D tensor of any type and shape `[N]`. +* `shape`: A 1-D int64 tensor of shape `[ndims]`. + +##### Returns: + + A `SparseTensor` + + +- - - + +#### `tf.SparseTensor.indices` {#SparseTensor.indices} + +The indices of non-zero values in the represented dense tensor. + +##### Returns: + + A 2-D Tensor of int64 with shape `[N, ndims]`, where `N` is the + number of non-zero values in the tensor, and `ndims` is the rank. + + +- - - + +#### `tf.SparseTensor.values` {#SparseTensor.values} + +The non-zero values in the represented dense tensor. + +##### Returns: + + A 1-D Tensor of any data type. + + +- - - + +#### `tf.SparseTensor.dtype` {#SparseTensor.dtype} + +The `DType` of elements in this tensor. + + +- - - + +#### `tf.SparseTensor.shape` {#SparseTensor.shape} + +A 1-D Tensor of int64 representing the shape of the dense tensor. + + +- - - + +#### `tf.SparseTensor.graph` {#SparseTensor.graph} + +The `Graph` that contains the index, value, and shape tensors. 
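The `dense[tuple(indices[i])] = values[i]` relationship above can be sketched in plain Python, with no TensorFlow required. `densify` is a hypothetical helper for illustration only, restricted to the 2-D case:

```python
def densify(indices, values, shape):
    """Expand 2-D COO-style components into a nested-list dense tensor."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for (i, j), v in zip(indices, values):
        dense[i][j] = v  # dense[tuple(indices[k])] = values[k]
    return dense

# Mirrors the documented example:
# SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], shape=[3, 4])
print(densify([[0, 0], [1, 2]], [1, 2], [3, 4]))
# [[1, 0, 0, 0], [0, 0, 2, 0], [0, 0, 0, 0]]
```

Note that this sketch does not require `indices` to be in row-major order, mirroring the fact that `SparseTensor` does not enforce ordering at construction time.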
+ + + +#### Other Methods +- - - + +#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval} + +Evaluates this sparse tensor in a `Session`. + +Calling this method will execute all preceding operations that +produce the inputs needed for the operation that produces this +tensor. + +*N.B.* Before invoking `SparseTensor.eval()`, its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + +##### Args: + + +* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. + See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a + description of the valid feed values. +* `session`: (Optional.) The `Session` to be used to evaluate this sparse + tensor. If none, the default session will be used. + +##### Returns: + + A `SparseTensorValue` object. + + +- - - + +#### `tf.SparseTensor.from_value(cls, sparse_tensor_value)` {#SparseTensor.from_value} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.TFRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.TFRecordReader.md deleted file mode 100644 index 31c6ffacfb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.TFRecordReader.md +++ /dev/null @@ -1,145 +0,0 @@ -A Reader that outputs the records from a TFRecords file. - -See ReaderBase for supported methods. -- - - - -#### `tf.TFRecordReader.__init__(name=None)` {#TFRecordReader.__init__} - -Create a TFRecordReader. - -##### Args: - - -* `name`: A name for the operation (optional). - - -- - - - -#### `tf.TFRecordReader.num_records_produced(name=None)` {#TFRecordReader.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. 
- - -- - - - -#### `tf.TFRecordReader.num_work_units_completed(name=None)` {#TFRecordReader.num_work_units_completed} - -Returns the number of work units this reader has finished processing. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.TFRecordReader.read(queue, name=None)` {#TFRecordReader.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.TFRecordReader.reader_ref` {#TFRecordReader.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.TFRecordReader.reset(name=None)` {#TFRecordReader.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.TFRecordReader.restore_state(state, name=None)` {#TFRecordReader.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. - -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.TFRecordReader.serialize_state(name=None)` {#TFRecordReader.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. 
- -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. - - -- - - - -#### `tf.TFRecordReader.supports_serialize` {#TFRecordReader.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.from_proto.md deleted file mode 100644 index 5b10d329bc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.from_proto.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.Variable.from_proto(variable_def)` {#Variable.from_proto} - -Returns a `Variable` object created from `variable_def`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.md deleted file mode 100644 index b300fac583..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.Variable.md +++ /dev/null @@ -1,460 +0,0 @@ -See the [Variables How To](../../how_tos/variables/index.md) for a high -level overview. - -A variable maintains state in the graph across calls to `run()`. You add a -variable to the graph by constructing an instance of the class `Variable`. - -The `Variable()` constructor requires an initial value for the variable, -which can be a `Tensor` of any type and shape. The initial value defines the -type and shape of the variable. After construction, the type and shape of -the variable are fixed. The value can be changed using one of the assign -methods. - -If you want to change the shape of a variable later you have to use an -`assign` Op with `validate_shape=False`. - -Just like any `Tensor`, variables created with `Variable()` can be used as -inputs for other Ops in the graph. 
Additionally, all the operators -overloaded for the `Tensor` class are carried over to variables, so you can -also add nodes to the graph by just doing arithmetic on variables. - -```python -import tensorflow as tf - -# Create a variable. -w = tf.Variable(, name=) - -# Use the variable in the graph like any Tensor. -y = tf.matmul(w, ...another variable or tensor...) - -# The overloaded operators are available too. -z = tf.sigmoid(w + y) - -# Assign a new value to the variable with `assign()` or a related method. -w.assign(w + 1.0) -w.assign_add(1.0) -``` - -When you launch the graph, variables have to be explicitly initialized before -you can run Ops that use their value. You can initialize a variable by -running its *initializer op*, restoring the variable from a save file, or -simply running an `assign` Op that assigns a value to the variable. In fact, -the variable *initializer op* is just an `assign` Op that assigns the -variable's initial value to the variable itself. - -```python -# Launch the graph in a session. -with tf.Session() as sess: - # Run the variable initializer. - sess.run(w.initializer) - # ...you now can run ops that use the value of 'w'... -``` - -The most common initialization pattern is to use the convenience function -`initialize_all_variables()` to add an Op to the graph that initializes -all the variables. You then run that Op after launching the graph. - -```python -# Add an Op to initialize all variables. -init_op = tf.initialize_all_variables() - -# Launch the graph in a session. -with tf.Session() as sess: - # Run the Op that initializes all variables. - sess.run(init_op) - # ...you can now run any Op that uses variable values... -``` - -If you need to create a variable with an initial value dependent on another -variable, use the other variable's `initialized_value()`. This ensures that -variables are initialized in the right order. - -All variables are automatically collected in the graph where they are -created. 
By default, the constructor adds the new variable to the graph -collection `GraphKeys.VARIABLES`. The convenience function -`all_variables()` returns the contents of that collection. - -When building a machine learning model it is often convenient to distinguish -betwen variables holding the trainable model parameters and other variables -such as a `global step` variable used to count training steps. To make this -easier, the variable constructor supports a `trainable=` parameter. If -`True`, the new variable is also added to the graph collection -`GraphKeys.TRAINABLE_VARIABLES`. The convenience function -`trainable_variables()` returns the contents of this collection. The -various `Optimizer` classes use this collection as the default list of -variables to optimize. - - -Creating a variable. - -- - - - -#### `tf.Variable.__init__(initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None)` {#Variable.__init__} - -Creates a new variable with value `initial_value`. - -The new variable is added to the graph collections listed in `collections`, -which defaults to `[GraphKeys.VARIABLES]`. - -If `trainable` is `True` the variable is also added to the graph collection -`GraphKeys.TRAINABLE_VARIABLES`. - -This constructor creates both a `variable` Op and an `assign` Op to set the -variable to its initial value. - -##### Args: - - -* `initial_value`: A `Tensor`, or Python object convertible to a `Tensor`, - which is the initial value for the Variable. The initial value must have - a shape specified unless `validate_shape` is set to False. Can also be a - callable with no argument that returns the initial value when called. In - that case, `dtype` must be specified. (Note that initializer functions - from init_ops.py must first be bound to a shape before being used here.) -* `trainable`: If `True`, the default, also adds the variable to the graph - collection `GraphKeys.TRAINABLE_VARIABLES`. 
This collection is used as - the default list of variables to use by the `Optimizer` classes. -* `collections`: List of graph collections keys. The new variable is added to - these collections. Defaults to `[GraphKeys.VARIABLES]`. -* `validate_shape`: If `False`, allows the variable to be initialized with a - value of unknown shape. If `True`, the default, the shape of - `initial_value` must be known. -* `caching_device`: Optional device string describing where the Variable - should be cached for reading. Defaults to the Variable's device. - If not `None`, caches on another device. Typical use is to cache - on the device where the Ops using the Variable reside, to deduplicate - copying through `Switch` and other conditional statements. -* `name`: Optional name for the variable. Defaults to `'Variable'` and gets - uniquified automatically. -* `variable_def`: `VariableDef` protocol buffer. If not `None`, recreates - the Variable object with its contents. `variable_def` and the other - arguments are mutually exclusive. -* `dtype`: If set, initial_value will be converted to the given type. - If `None`, either the datatype will be kept (if `initial_value` is - a Tensor), or `convert_to_tensor` will decide. - -##### Returns: - - A Variable. - -##### Raises: - - -* `ValueError`: If both `variable_def` and initial_value are specified. -* `ValueError`: If the initial value is not specified, or does not have a - shape and `validate_shape` is `True`. - - -- - - - -#### `tf.Variable.initialized_value()` {#Variable.initialized_value} - -Returns the value of the initialized variable. - -You should use this instead of the variable itself to initialize another -variable with a value that depends on the value of this variable. - -```python -# Initialize 'v' with a random tensor. -v = tf.Variable(tf.truncated_normal([10, 40])) -# Use `initialized_value` to guarantee that `v` has been -# initialized before its value is used to initialize `w`. 
-# The random values are picked only once. -w = tf.Variable(v.initialized_value() * 2.0) -``` - -##### Returns: - - A `Tensor` holding the value of this variable after its initializer - has run. - - - -Changing a variable value. - -- - - - -#### `tf.Variable.assign(value, use_locking=False)` {#Variable.assign} - -Assigns a new value to the variable. - -This is essentially a shortcut for `assign(self, value)`. - -##### Args: - - -* `value`: A `Tensor`. The new value for this variable. -* `use_locking`: If `True`, use locking during the assignment. - -##### Returns: - - A `Tensor` that will hold the new value of this variable after - the assignment has completed. - - -- - - - -#### `tf.Variable.assign_add(delta, use_locking=False)` {#Variable.assign_add} - -Adds a value to this variable. - - This is essentially a shortcut for `assign_add(self, delta)`. - -##### Args: - - -* `delta`: A `Tensor`. The value to add to this variable. -* `use_locking`: If `True`, use locking during the operation. - -##### Returns: - - A `Tensor` that will hold the new value of this variable after - the addition has completed. - - -- - - - -#### `tf.Variable.assign_sub(delta, use_locking=False)` {#Variable.assign_sub} - -Subtracts a value from this variable. - -This is essentially a shortcut for `assign_sub(self, delta)`. - -##### Args: - - -* `delta`: A `Tensor`. The value to subtract from this variable. -* `use_locking`: If `True`, use locking during the operation. - -##### Returns: - - A `Tensor` that will hold the new value of this variable after - the subtraction has completed. - - -- - - - -#### `tf.Variable.scatter_sub(sparse_delta, use_locking=False)` {#Variable.scatter_sub} - -Subtracts `IndexedSlices` from this variable. - -This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices, -sparse_delta.values)`. - -##### Args: - - -* `sparse_delta`: `IndexedSlices` to be subtracted from this variable. -* `use_locking`: If `True`, use locking during the operation. 
- -##### Returns: - - A `Tensor` that will hold the new value of this variable after - the scattered subtraction has completed. - -##### Raises: - - -* `ValueError`: if `sparse_delta` is not an `IndexedSlices`. - - -- - - - -#### `tf.Variable.count_up_to(limit)` {#Variable.count_up_to} - -Increments this variable until it reaches `limit`. - -When that Op is run it tries to increment the variable by `1`. If -incrementing the variable would bring it above `limit` then the Op raises -the exception `OutOfRangeError`. - -If no error is raised, the Op outputs the value of the variable before -the increment. - -This is essentially a shortcut for `count_up_to(self, limit)`. - -##### Args: - - -* `limit`: value at which incrementing the variable raises an error. - -##### Returns: - - A `Tensor` that will hold the variable value before the increment. If no - other Op modifies this variable, the values produced will all be - distinct. - - - -- - - - -#### `tf.Variable.eval(session=None)` {#Variable.eval} - -In a session, computes and returns the value of this variable. - -This is not a graph construction method, it does not add ops to the graph. - -This convenience method requires a session where the graph containing this -variable has been launched. If no session is passed, the default session is -used. See the [Session class](../../api_docs/python/client.md#Session) for -more information on launching a graph and on sessions. - -```python -v = tf.Variable([1, 2]) -init = tf.initialize_all_variables() - -with tf.Session() as sess: - sess.run(init) - # Usage passing the session explicitly. - print(v.eval(sess)) - # Usage with the default session. The 'with' block - # above makes 'sess' the default session. - print(v.eval()) -``` - -##### Args: - - -* `session`: The session to use to evaluate this variable. If - none, the default session is used. - -##### Returns: - - A numpy `ndarray` with a copy of the value of this variable. - - - -Properties. 
- -- - - - -#### `tf.Variable.name` {#Variable.name} - -The name of this variable. - - -- - - - -#### `tf.Variable.dtype` {#Variable.dtype} - -The `DType` of this variable. - - -- - - - -#### `tf.Variable.get_shape()` {#Variable.get_shape} - -The `TensorShape` of this variable. - -##### Returns: - - A `TensorShape`. - - -- - - - -#### `tf.Variable.device` {#Variable.device} - -The device of this variable. - - -- - - - -#### `tf.Variable.initializer` {#Variable.initializer} - -The initializer operation for this variable. - - -- - - - -#### `tf.Variable.graph` {#Variable.graph} - -The `Graph` of this variable. - - -- - - - -#### `tf.Variable.op` {#Variable.op} - -The `Operation` of this variable. - - - -#### Other Methods -- - - - -#### `tf.Variable.from_proto(variable_def)` {#Variable.from_proto} - -Returns a `Variable` object created from `variable_def`. - - -- - - - -#### `tf.Variable.initial_value` {#Variable.initial_value} - -Returns the Tensor used as the initial value for the variable. - -Note that this is different from `initialized_value()` which runs -the op that initializes the variable before returning its value. -This method returns the tensor that is used by the op that initializes -the variable. - -##### Returns: - - A `Tensor`. - - -- - - - -#### `tf.Variable.ref()` {#Variable.ref} - -Returns a reference to this variable. - -You usually do not need to call this method as all ops that need a reference -to the variable call it automatically. - -Returns is a `Tensor` which holds a reference to the variable. You can -assign a new value to the variable by passing the tensor to an assign op. -See [`value()`](#Variable.value) if you want to get the value of the -variable. - -##### Returns: - - A `Tensor` that is a reference to the variable. - - -- - - - -#### `tf.Variable.to_proto()` {#Variable.to_proto} - -Converts a `Variable` to a `VariableDef` protocol buffer. - -##### Returns: - - A `VariableDef` protocol buffer. 
- - -- - - - -#### `tf.Variable.value()` {#Variable.value} - -Returns the last snapshot of this variable. - -You usually do not need to call this method as all ops that need the value -of the variable call it automatically through a `convert_to_tensor()` call. - -Returns a `Tensor` which holds the value of the variable. You can not -assign a new value to this tensor as it is not a reference to the variable. -See [`ref()`](#Variable.ref) if you want to get a reference to the -variable. - -To avoid copies, if the consumer of the returned value is on the same device -as the variable, this actually returns the live value of the variable, not -a copy. Updates to the variable are seen by the consumer. If the consumer -is on a different device it will get a copy of the variable. - -##### Returns: - - A `Tensor` containing the value of the variable. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.accumulate_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.accumulate_n.md new file mode 100644 index 0000000000..a85d0d7f87 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.accumulate_n.md @@ -0,0 +1,37 @@ +### `tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)` {#accumulate_n} + +Returns the element-wise sum of a list of tensors. + +Optionally, pass `shape` and `tensor_dtype` for shape and type checking, +otherwise, these are inferred. + +For example: + +```python +# tensor 'a' is [[1, 2], [3, 4]] +# tensor `b` is [[5, 0], [0, 6]] +tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]] + +# Explicitly pass shape and type +tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) + ==> [[7, 4], [6, 14]] +``` + +##### Args: + + +* `inputs`: A list of `Tensor` objects, each with same shape and type. +* `shape`: Shape of elements of `inputs`. +* `tensor_dtype`: The type of `inputs`. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + A `Tensor` of same shape and type as the elements of `inputs`. + +##### Raises: + + +* `ValueError`: If `inputs` don't all have same shape and dtype or the shape + cannot be inferred. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_equal.md new file mode 100644 index 0000000000..ea4fd3a1fd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_equal.md @@ -0,0 +1,35 @@ +### `tf.assert_equal(x, y, data=None, summarize=None, name=None)` {#assert_equal} + +Assert the condition `x == y` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_equal(x, y)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_equal(x, y)], x) +``` + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] == y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`, `y`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_equal". + +##### Returns: + + Op that raises `InvalidArgumentError` if `x == y` is False. 
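The "(possibly broadcast) elements" condition above can be sketched in plain Python over flat lists. `all_equal` is a hypothetical stand-in, not the TensorFlow op; only length-1 broadcasting is modeled:

```python
def all_equal(x, y):
    """Element-wise equality over flat lists, with length-1 broadcasting."""
    if len(x) == 1 and len(y) > 1:
        x = x * len(y)  # broadcast the single element of x against y
    if len(y) == 1 and len(x) > 1:
        y = y * len(x)  # broadcast the single element of y against x
    if len(x) != len(y):
        raise ValueError("shapes are not broadcast-compatible")
    # Empty inputs are trivially equal, matching the op's convention.
    return all(a == b for a, b in zip(x, y))

print(all_equal([1, 2, 3], [1, 2, 3]))  # True
print(all_equal([5], [5, 5, 5]))        # True
print(all_equal([], []))                # True
```

The real op broadcasts full tensor shapes and reports failures through `InvalidArgumentError` at run time rather than a Python `ValueError`.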
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_non_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_non_positive.md new file mode 100644 index 0000000000..83eb36a95c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_non_positive.md @@ -0,0 +1,34 @@ +### `tf.assert_non_positive(x, data=None, summarize=None, name=None)` {#assert_non_positive} + +Assert the condition `x <= 0` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_non_positive(x)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_non_positive(x)], x) +``` + +Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. +If `x` is empty this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). + Defaults to "assert_non_positive". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` is all non-positive. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_positive.md new file mode 100644 index 0000000000..8b727d6215 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_positive.md @@ -0,0 +1,33 @@ +### `tf.assert_positive(x, data=None, summarize=None, name=None)` {#assert_positive} + +Assert the condition `x > 0` holds element-wise. 
+ +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_positive(x)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_positive(x)], x) +``` + +Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. +If `x` is empty this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_positive". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` is all positive. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.batch_matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.batch_matmul.md new file mode 100644 index 0000000000..a4764435b8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.batch_matmul.md @@ -0,0 +1,41 @@ +### `tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)` {#batch_matmul} + +Multiplies slices of two tensors in batches. + +Multiplies all slices of `Tensor` `x` and `y` (each slice can be +viewed as an element of a batch), and arranges the individual results +in a single output tensor of the same batch size. Each of the +individual slices can optionally be adjointed (to adjoint a matrix +means to transpose and conjugate it) before multiplication by setting +the `adj_x` or `adj_y` flag to `True`, which are by default `False`. + +The input tensors `x` and `y` are 3-D or higher with shape `[..., r_x, c_x]` +and `[..., r_y, c_y]`. 
+ +The output tensor is 3-D or higher with shape `[..., r_o, c_o]`, where: + + r_o = c_x if adj_x else r_x + c_o = r_y if adj_y else c_y + +It is computed as: + + output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :]) + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `complex64`, `complex128`. + 3-D or higher with shape `[..., r_x, c_x]`. +* `y`: A `Tensor`. Must have the same type as `x`. + 3-D or higher with shape `[..., r_y, c_y]`. +* `adj_x`: An optional `bool`. Defaults to `False`. + If `True`, adjoint the slices of `x`. Defaults to `False`. +* `adj_y`: An optional `bool`. Defaults to `False`. + If `True`, adjoint the slices of `y`. Defaults to `False`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + 3-D or higher with shape `[..., r_o, c_o]` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md new file mode 100644 index 0000000000..a393375986 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md @@ -0,0 +1,29 @@ +### `tf.clip_by_norm(t, clip_norm, name=None)` {#clip_by_norm} + +Clips tensor values to a maximum L2-norm. + +Given a tensor `t`, and a maximum clip value `clip_norm`, this operation +normalizes `t` so that its L2-norm is less than or equal to `clip_norm`. +Specifically, if the L2-norm is already less than or equal to `clip_norm`, +then `t` is not modified. If the L2-norm is greater than `clip_norm`, then +this operation returns a tensor of the same type and shape as `t` with its +values set to: + +`t * clip_norm / l2norm(t)` + +In this case, the L2-norm of the output tensor is `clip_norm`. + +This operation is typically used to clip gradients before applying them with +an optimizer. + +##### Args: + + +* `t`: A `Tensor`. 
+* `clip_norm`: A 0-D (scalar) `Tensor` > 0. A maximum clipping value. +* `name`: A name for the operation (optional). + +##### Returns: + + A clipped `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.concat.md deleted file mode 100644 index c54a7503da..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.concat.md +++ /dev/null @@ -1,45 +0,0 @@ -### `tf.concat(concat_dim, values, name='concat')` {#concat} - -Concatenates tensors along one dimension. - -Concatenates the list of tensors `values` along dimension `concat_dim`. If -`values[i].shape = [D0, D1, ... Dconcat_dim(i), ...Dn]`, the concatenated -result has shape - - [D0, D1, ... Rconcat_dim, ...Dn] - -where - - Rconcat_dim = sum(Dconcat_dim(i)) - -That is, the data from the input tensors is joined along the `concat_dim` -dimension. - -The number of dimensions of the input tensors must match, and all dimensions -except `concat_dim` must be equal. - -For example: - -```python -t1 = [[1, 2, 3], [4, 5, 6]] -t2 = [[7, 8, 9], [10, 11, 12]] -tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] -tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]] - -# tensor t3 with shape [2, 3] -# tensor t4 with shape [2, 3] -tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3] -tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6] -``` - -##### Args: - - -* `concat_dim`: 0-D `int32` `Tensor`. Dimension along which to concatenate. -* `values`: A list of `Tensor` objects or a single `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` resulting from concatenation of the input tensors. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.copy_graph.copy_variable_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.copy_graph.copy_variable_to_graph.md new file mode 100644 index 0000000000..85e336a29b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.copy_graph.copy_variable_to_graph.md @@ -0,0 +1,20 @@ +### `tf.contrib.copy_graph.copy_variable_to_graph(org_instance, to_graph, scope='')` {#copy_variable_to_graph} + +Given a `Variable` instance from one `Graph`, initializes and returns +a copy of it from another `Graph`, under the specified scope +(default `""`). + +Args: +org_instance: A `Variable` from some `Graph`. +to_graph: The `Graph` to copy the `Variable` to. +scope: A scope for the new `Variable` (default `""`). + +##### Returns: + + The copied `Variable` from `to_graph`. + +##### Raises: + + +* `TypeError`: If `org_instance` is not a `Variable`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Chi2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Chi2.md deleted file mode 100644 index 61ca5fb9d3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Chi2.md +++ /dev/null @@ -1,260 +0,0 @@ -The Chi2 distribution with degrees of freedom df. - -The PDF of this distribution is: - -```pdf(x) = (x^(df/2 - 1)e^(-x/2))/(2^(k/2)Gamma(k/2)), x > 0``` - -Note that the Chi2 distribution is a special case of the Gamma distribution, -with Chi2(df) = Gamma(df/2, 1/2). -- - - - -#### `tf.contrib.distributions.Chi2.__init__(df, name='Chi2')` {#Chi2.__init__} - - - - -- - - - -#### `tf.contrib.distributions.Chi2.alpha` {#Chi2.alpha} - -Shape parameter. 
- - -- - - - -#### `tf.contrib.distributions.Chi2.batch_shape(name='batch_shape')` {#Chi2.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. - -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.Chi2.beta` {#Chi2.beta} - -Inverse scale parameter. - - -- - - - -#### `tf.contrib.distributions.Chi2.cdf(x, name='cdf')` {#Chi2.cdf} - -CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Chi2.df` {#Chi2.df} - - - - -- - - - -#### `tf.contrib.distributions.Chi2.dtype` {#Chi2.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.Chi2.entropy(name='entropy')` {#Chi2.entropy} - -The entropy of Gamma distribution(s). - -This is defined to be - -``` -entropy = alpha - log(beta) + log(Gamma(alpha)) - + (1-alpha)digamma(alpha) -``` - -where digamma(alpha) is the digamma function. - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.Chi2.event_shape(name='event_shape')` {#Chi2.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.Chi2.get_batch_shape()` {#Chi2.get_batch_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `batch_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. 
- - -- - - - -#### `tf.contrib.distributions.Chi2.get_event_shape()` {#Chi2.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. - - -- - - - -#### `tf.contrib.distributions.Chi2.is_reparameterized` {#Chi2.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.Chi2.log_cdf(x, name='log_cdf')` {#Chi2.log_cdf} - -Log CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Chi2.log_pdf(x, name='log_pdf')` {#Chi2.log_pdf} - -Log pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Chi2.mean` {#Chi2.mean} - -Mean of each batch member. - - -- - - - -#### `tf.contrib.distributions.Chi2.name` {#Chi2.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.Chi2.pdf(x, name='pdf')` {#Chi2.pdf} - -Pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the PDFs of `x` - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Chi2.sample(n, seed=None, name=None)` {#Chi2.sample} - -Generate `n` samples. - -##### Args: - - -* `n`: scalar. 
Number of samples to draw from each distribution. -* `seed`: Python integer seed for RNG -* `name`: name to give to the op. - -##### Returns: - - -* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` - with values of type `self.dtype`. - - -- - - - -#### `tf.contrib.distributions.Chi2.variance` {#Chi2.variance} - -Variance of each batch member. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Gamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Gamma.md deleted file mode 100644 index 5a7bbea7ae..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.Gamma.md +++ /dev/null @@ -1,284 +0,0 @@ -The `Gamma` distribution with parameter alpha and beta. - -The parameters are the shape and inverse scale parameters alpha, beta. - -The PDF of this distribution is: - -```pdf(x) = (beta^alpha)(x^(alpha-1))e^(-x*beta)/Gamma(alpha), x > 0``` - -and the CDF of this distribution is: - -```cdf(x) = GammaInc(alpha, beta * x) / Gamma(alpha), x > 0``` - -where GammaInc is the incomplete lower Gamma function. - -Examples: - -```python -dist = Gamma(alpha=3.0, beta=2.0) -dist2 = Gamma(alpha=[3.0, 4.0], beta=[2.0, 3.0]) -``` -- - - - -#### `tf.contrib.distributions.Gamma.__init__(alpha, beta, name='Gamma')` {#Gamma.__init__} - -Construct Gamma distributions with parameters `alpha` and `beta`. - -The parameters `alpha` and `beta` must be shaped in a way that supports -broadcasting (e.g. `alpha + beta` is a valid operation). - -##### Args: - - -* `alpha`: `float` or `double` tensor, the shape params of the - distribution(s). - alpha must contain only positive values. -* `beta`: `float` or `double` tensor, the inverse scale params of the - distribution(s). - beta must contain only positive values. -* `name`: The name to prepend to all ops created by this distribution. 
- -##### Raises: - - -* `TypeError`: if `alpha` and `beta` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Gamma.alpha` {#Gamma.alpha} - -Shape parameter. - - -- - - - -#### `tf.contrib.distributions.Gamma.batch_shape(name='batch_shape')` {#Gamma.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. - -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.Gamma.beta` {#Gamma.beta} - -Inverse scale parameter. - - -- - - - -#### `tf.contrib.distributions.Gamma.cdf(x, name='cdf')` {#Gamma.cdf} - -CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Gamma.dtype` {#Gamma.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.Gamma.entropy(name='entropy')` {#Gamma.entropy} - -The entropy of Gamma distribution(s). - -This is defined to be - -``` -entropy = alpha - log(beta) + log(Gamma(alpha)) - + (1-alpha)digamma(alpha) -``` - -where digamma(alpha) is the digamma function. - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.Gamma.event_shape(name='event_shape')` {#Gamma.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.Gamma.get_batch_shape()` {#Gamma.get_batch_shape} - -`TensorShape` available at graph construction time. 
- -Same meaning as `batch_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. - - -- - - - -#### `tf.contrib.distributions.Gamma.get_event_shape()` {#Gamma.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. - -##### Returns: - - `TensorShape` object. - - -- - - - -#### `tf.contrib.distributions.Gamma.is_reparameterized` {#Gamma.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.Gamma.log_cdf(x, name='log_cdf')` {#Gamma.log_cdf} - -Log CDF of observations `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Gamma.log_pdf(x, name='log_pdf')` {#Gamma.log_pdf} - -Log pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Gamma.mean` {#Gamma.mean} - -Mean of each batch member. - - -- - - - -#### `tf.contrib.distributions.Gamma.name` {#Gamma.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.Gamma.pdf(x, name='pdf')` {#Gamma.pdf} - -Pdf of observations in `x` under these Gamma distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the PDFs of `x` - -##### Raises: - - -* `TypeError`: if `x` and `alpha` are different dtypes. 
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.sample(n, seed=None, name=None)` {#Gamma.sample}
-
-Generate `n` samples.
-
-##### Args:
-
-
-* `n`: scalar. Number of samples to draw from each distribution.
-* `seed`: Python integer seed for RNG
-* `name`: name to give to the op.
-
-##### Returns:
-
-
-* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
-    with values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.variance` {#Gamma.variance}
-
-Variance of each batch member.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md
new file mode 100644
index 0000000000..f296166c9c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md
@@ -0,0 +1,43 @@
+### `tf.contrib.layers.convolution2d(*args, **kwargs)` {#convolution2d}
+
+Adds a 2D convolution followed by an optional batch_norm layer.
+
+`convolution2d` creates a variable called `weights`, representing the
+convolutional kernel, that is convolved with the `inputs` to produce a
+`Tensor` of activations. If a `normalizer_fn` is provided (such as
+`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
+None and a `biases_initializer` is provided then a `biases` variable would be
+created and added to the activations. Finally, if `activation_fn` is not `None`,
+it is applied to the activations as well.
+
+##### Args:
+
+
+* `inputs`: a 4-D tensor `[batch_size, height, width, channels]`.
+* `num_outputs`: integer, the number of output filters.
+* `kernel_size`: a list of length 2 `[kernel_height, kernel_width]` of
+    the filters. Can be an int if both values are the same.
+* `stride`: a list of length 2 `[stride_height, stride_width]`.
+    Can be an int if both strides are the same. Note that presently
+    both strides must have the same value.
+* `padding`: one of `VALID` or `SAME`.
+* `activation_fn`: activation function.
+* `normalizer_fn`: normalization function to use instead of `biases`. If
+    `normalizer_fn` is provided then `biases_initializer` and
+    `biases_regularizer` are ignored and `biases` are not created nor added.
+* `normalizer_params`: normalization function parameters.
+* `weights_initializer`: An initializer for the weights.
+* `weights_regularizer`: Optional regularizer for the weights.
+* `biases_initializer`: An initializer for the biases. If None, skip biases.
+* `biases_regularizer`: Optional regularizer for the biases.
+* `reuse`: whether or not the layer and its variables should be reused. To be
+    able to reuse the layer, `scope` must be given.
+* `variables_collections`: optional list of collections for all the variables or
+    a dictionary containing a different list of collections per variable.
+* `outputs_collections`: collection to add the outputs to.
+* `scope`: Optional scope for `variable_op_scope`.
+
+##### Returns:
+
+    a tensor representing the output of the operation.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.l2_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.l2_regularizer.md
new file mode 100644
index 0000000000..9c3d06393b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.l2_regularizer.md
@@ -0,0 +1,22 @@
+### `tf.contrib.layers.l2_regularizer(scale)` {#l2_regularizer}
+
+Returns a function that can be used to apply L2 regularization to weights.
+
+Small values of L2 can help prevent overfitting the training data.
+
+##### Args:
+
+
+* `scale`: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
+
+##### Returns:
+
+    A function with signature `l2(weights, name=None)` that applies L2
+    regularization.
+ +##### Raises: + + +* `ValueError`: If scale is outside of the range [0.0, 1.0] or if scale is not a + float. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.xavier_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.xavier_initializer.md deleted file mode 100644 index 55631e4b05..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.xavier_initializer.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer} - -Returns an initializer performing "Xavier" initialization for weights. - -This function implements the weight initialization from: - -Xavier Glorot and Yoshua Bengio (2010): - Understanding the difficulty of training deep feedforward neural - networks. International conference on artificial intelligence and - statistics. - -This initializer is designed to keep the scale of the gradients roughly the -same in all layers. In uniform distribution this ends up being the range: -`x = sqrt(6. / (in + out)); [-x, x]` and for normal distribution a standard -deviation of `sqrt(3. / (in + out))` is used. - -##### Args: - - -* `uniform`: Whether to use uniform or normal distributed random initialization. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer for a weight matrix. 
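The uniform range quoted in the `xavier_initializer` doc above can be reproduced directly. The sketch below is plain NumPy, not the TF initializer (the helper name `xavier_uniform` is hypothetical); it draws a weight matrix from `[-x, x]` with `x = sqrt(6 / (in + out))` and checks the bound:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=0):
    # Uniform Xavier/Glorot: samples drawn from [-x, x] with
    # x = sqrt(6 / (fan_in + fan_out)), per the formula quoted above.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

w = xavier_uniform(300, 100)
bound = np.sqrt(6.0 / (300 + 100))  # ~0.1225 for a 300-in, 100-out layer
assert w.shape == (300, 100)
assert np.all(np.abs(w) <= bound)
```

The point of the shrinking bound is visible here: wider layers get smaller initial weights, keeping activation and gradient scales roughly constant across layers.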
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.RunConfig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.RunConfig.md new file mode 100644 index 0000000000..ffdf8703c0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.RunConfig.md @@ -0,0 +1,47 @@ +This class specifies the specific configurations for the run. + +Parameters: + execution_mode: Runners use this flag to execute different tasks, like + training vs evaluation. 'all' (the default) executes both training and + eval. + master: TensorFlow master. Empty string (the default) for local. + task: Task id of the replica running the training (default: 0). + num_ps_replicas: Number of parameter server tasks to use (default: 0). + training_worker_session_startup_stagger_secs: Seconds to sleep between the + startup of each worker task session (default: 5). + training_worker_max_startup_secs: Max seconds to wait before starting any + worker (default: 60). + eval_delay_secs: Number of seconds between the beginning of each eval run. + If one run takes more than this amount of time, the next run will start + immediately once that run completes (default 60). + eval_steps: Number of steps to run in each eval (default: 100). + num_cores: Number of cores to be used (default: 4). + verbose: Controls the verbosity, possible values: + 0: the algorithm and debug information is muted. + 1: trainer prints the progress. + 2: log device placement is printed. + gpu_memory_fraction: Fraction of GPU memory used by the process on + each GPU uniformly on the same machine. + tf_random_seed: Random seed for TensorFlow initializers. + Setting this value allows consistency between reruns. + keep_checkpoint_max: The maximum number of recent checkpoint files to keep. + As new files are created, older files are deleted. + If None or 0, all checkpoint files are kept. 
+ Defaults to 5 (that is, the 5 most recent checkpoint files are kept.) + keep_checkpoint_every_n_hours: Number of hours between each checkpoint + to be saved. The default value of 10,000 hours effectively disables + the feature. + +Attributes: + tf_master: Tensorflow master. + tf_config: Tensorflow Session Config proto. + tf_random_seed: Tensorflow random seed. + keep_checkpoint_max: Maximum number of checkpoints to keep. + keep_checkpoint_every_n_hours: Number of hours between each checkpoint. +- - - + +#### `tf.contrib.learn.RunConfig.__init__(execution_mode='all', master='', task=0, num_ps_replicas=0, training_worker_session_startup_stagger_secs=5, training_worker_max_startup_secs=60, eval_delay_secs=60, eval_steps=100, num_cores=4, verbose=1, gpu_memory_fraction=1, tf_random_seed=42, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000)` {#RunConfig.__init__} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.evaluate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.evaluate.md deleted file mode 100644 index 022662c3f6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.evaluate.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.contrib.learn.evaluate(graph, output_dir, checkpoint_path, eval_dict, update_op=None, global_step_tensor=None, supervisor_master='', log_every_steps=10, feed_fn=None, max_steps=None)` {#evaluate} - -Evaluate a model loaded from a checkpoint. - -Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint -to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval -loop for `max_steps` steps. - -In each step of evaluation, all tensors in the `eval_dict` are evaluated, and -every `log_every_steps` steps, they are logged. At the very end of evaluation, -a summary is evaluated (finding the summary ops using `Supervisor`'s logic) -and written to `output_dir`. 
- -##### Args: - - -* `graph`: A `Graph` to train. It is expected that this graph is not in use - elsewhere. -* `output_dir`: A string containing the directory to write a summary to. -* `checkpoint_path`: A string containing the path to a checkpoint to restore. - Can be `None` if the graph doesn't require loading any variables. -* `eval_dict`: A `dict` mapping string names to tensors to evaluate. It is - evaluated in every logging step. The result of the final evaluation is - returned. If update_op is None, then it's evaluated in every step. -* `update_op`: A `Tensor` which is run in every step. -* `global_step_tensor`: A `Variable` containing the global step. If `None`, - one is extracted from the graph using the same logic as in `Supervisor`. - Used to place eval summaries on training curves. -* `supervisor_master`: The master string to use when preparing the session. -* `log_every_steps`: Integer. Output logs every `log_every_steps` evaluation - steps. The logs contain the `eval_dict` and timing information. -* `feed_fn`: A function that is called every iteration to produce a `feed_dict` - passed to `session.run` calls. Optional. -* `max_steps`: Integer. Evaluate `eval_dict` this many times. - -##### Returns: - - A tuple `(eval_results, global_step)`: - -* `eval_results`: A `dict` mapping `string` to numeric values (`int`, `float`) - that are the result of running eval_dict in the last step. `None` if no - eval steps were run. -* `global_step`: The global step this evaluation corresponds to. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.infer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.infer.md
deleted file mode 100644
index 616e74f3a4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.infer.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.learn.infer(restore_checkpoint_path, output_dict, feed_dict=None)` {#infer}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_feeds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_feeds.md
new file mode 100644
index 0000000000..f5c3e977d0
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_feeds.md
@@ -0,0 +1,28 @@
+### `tf.contrib.learn.run_feeds(output_dict, feed_dicts, restore_checkpoint_path=None)` {#run_feeds}
+
+Run `output_dict` tensors with each input in `feed_dicts`.
+
+If `restore_checkpoint_path` is supplied, restore from checkpoint. Otherwise,
+init all variables.
+
+##### Args:
+
+
+* `output_dict`: A `dict` mapping string names to `Tensor` objects to run.
+    Tensors must all be from the same graph.
+* `feed_dicts`: Iterable of `dict` objects of input values to feed.
+* `restore_checkpoint_path`: A string containing the path to a checkpoint to
+    restore.
+
+##### Returns:
+
+  A list of dicts of values read from `output_dict` tensors, one item in the
+  list for each item in `feed_dicts`. Keys are the same as `output_dict`,
+  values are the results read from the corresponding `Tensor` in
+  `output_dict`.
+
+##### Raises:
+
+
+* `ValueError`: if `output_dict` or `feed_dicts` is None or empty.
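The `run_feeds` contract above is essentially a loop: one run per feed dict, each producing a result dict keyed like `output_dict`. A shape-only sketch in plain Python (the name `run_feeds_sketch` and the callable stand-ins for tensors are hypothetical — this is not the contrib implementation):

```python
def run_feeds_sketch(output_dict, feed_dicts):
    # One "session.run" per feed dict; each result dict reuses
    # output_dict's keys, matching the documented return contract.
    if not output_dict or not feed_dicts:
        raise ValueError("output_dict and feed_dicts must be non-empty")
    return [{name: fn(feed) for name, fn in output_dict.items()}
            for feed in feed_dicts]

outs = run_feeds_sketch({"double": lambda feed: 2 * feed["x"]},
                        [{"x": 1}, {"x": 5}])
# One result dict per feed dict, same keys as output_dict.
assert outs == [{"double": 2}, {"double": 10}]
```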
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean.md deleted file mode 100644 index 780ecbaa7b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.contrib.metrics.streaming_mean(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean} - -Computes the (weighted) mean of the given values. - -The `streaming_mean` function creates two local variables, `total` and `count` -that are used to compute the average of `values`. This average is ultimately -returned as `mean` which is an idempotent operation that simply divides -`total` by `count`. To facilitate the estimation of a mean over a stream -of data, the function creates an `update_op` operation whose behavior is -dependent on the value of `weights`. If `weights` is None, then `update_op` -increments `total` with the reduced sum of `values` and increments `count` -with the number of elements in `values`. If `weights` is not `None`, then -`update_op` increments `total` with the reduced sum of the product of `values` -and `weights` and increments `count` with the reduced sum of weights. -In addition to performing the updates, `update_op` also returns the -`mean`. - -##### Args: - - -* `values`: A `Tensor` of arbitrary dimensions. -* `weights`: An optional set of weights of the same shape as `values`. If - `weights` is not None, the function computes a weighted mean. -* `metrics_collections`: An optional list of collections that `mean` - should be added to. -* `updates_collections`: An optional list of collections that `update_op` - should be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `mean`: A tensor representing the current mean, the value of `total` divided - by `count`. 
-* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `mean_value`. - -##### Raises: - - -* `ValueError`: If `weights` is not `None` and its shape doesn't match `values` - or if either `metrics_collections` or `updates_collections` are not a list - or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean_cosine_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean_cosine_distance.md deleted file mode 100644 index 1900cd1a97..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_mean_cosine_distance.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.contrib.metrics.streaming_mean_cosine_distance(predictions, labels, dim, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_cosine_distance} - -Computes the cosine distance between the labels and predictions. - -The `streaming_mean_cosine_distance` function creates two local variables, -`total` and `count` that are used to compute the average cosine distance -between `predictions` and `labels`. This average is ultimately returned as -`mean_distance` which is an idempotent operation that simply divides `total` -by `count. To facilitate the estimation of a mean over multiple batches -of data, the function creates an `update_op` operation whose behavior is -dependent on the value of `weights`. If `weights` is None, then `update_op` -increments `total` with the reduced sum of `values and increments `count` with -the number of elements in `values`. If `weights` is not `None`, then -`update_op` increments `total` with the reduced sum of the product of `values` -and `weights` and increments `count` with the reduced sum of weights. - -##### Args: - - -* `predictions`: A tensor of the same size as labels. -* `labels`: A tensor of arbitrary size. 
-* `dim`: The dimension along which the cosine distance is computed. -* `weights`: An optional set of weights which indicates which predictions to - ignore during metric computation. Its size matches that of labels except - for the value of 'dim' which should be 1. For example if labels has - dimensions [32, 100, 200, 3], then `weights` should have dimensions - [32, 100, 200, 1]. -* `metrics_collections`: An optional list of collections that the metric - value variable should be added to. -* `updates_collections`: An optional list of collections that the metric update - ops should be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `mean_distance`: A tensor representing the current mean, the value of `total` - divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately. - -##### Raises: - - -* `ValueError`: If labels and predictions are of different sizes or if the - ignore_mask is of the wrong size or if either `metrics_collections` or - `updates_collections` are not a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.decode_csv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.decode_csv.md new file mode 100644 index 0000000000..f2ebf6945b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.decode_csv.md @@ -0,0 +1,26 @@ +### `tf.decode_csv(records, record_defaults, field_delim=None, name=None)` {#decode_csv} + +Convert CSV records to tensors. Each column maps to one tensor. + +RFC 4180 format is expected for the CSV records. +(https://tools.ietf.org/html/rfc4180) +Note that we allow leading and trailing spaces with int or float field. + +##### Args: + + +* `records`: A `Tensor` of type `string`. + Each string is a record/row in the csv and all records should have + the same format. 
+* `record_defaults`: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`. + One tensor per column of the input record, with either a + scalar default value for that column or empty if the column is required. +* `field_delim`: An optional `string`. Defaults to `","`. + delimiter to separate fields in a record. +* `name`: A name for the operation (optional). + +##### Returns: + + A list of `Tensor` objects. Has the same type as `record_defaults`. + Each tensor will have the same shape as records. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.digamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.digamma.md new file mode 100644 index 0000000000..5af2d11062 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.digamma.md @@ -0,0 +1,16 @@ +### `tf.digamma(x, name=None)` {#digamma} + +Computes Psi, the derivative of Lgamma (the log of the absolute value of + +`Gamma(x)`), element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.dynamic_partition.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.dynamic_partition.md new file mode 100644 index 0000000000..3fbb885055 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.dynamic_partition.md @@ -0,0 +1,50 @@ +### `tf.dynamic_partition(data, partitions, num_partitions, name=None)` {#dynamic_partition} + +Partitions `data` into `num_partitions` tensors using indices from `partitions`. + +For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` +becomes part of `outputs[partitions[js]]`. 
The slices with `partitions[js] = i` +are placed in `outputs[i]` in lexicographic order of `js`, and the first +dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. +In detail, + + outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] + + outputs[i] = pack([data[js, ...] for js if partitions[js] == i]) + +`data.shape` must start with `partitions.shape`. + +For example: + + # Scalar partitions + partitions = 1 + num_partitions = 2 + data = [10, 20] + outputs[0] = [] # Empty with shape [0, 2] + outputs[1] = [[10, 20]] + + # Vector partitions + partitions = [0, 0, 1, 1, 0] + num_partitions = 2 + data = [10, 20, 30, 40, 50] + outputs[0] = [10, 20, 50] + outputs[1] = [30, 40] + +
+ +
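The vector-partitions example above is easy to reproduce outside TensorFlow. A NumPy sketch of the 1-D case (the helper `dynamic_partition_1d` is hypothetical, covering only `partitions.ndim == 1`, not the general `js`-tuple rule):

```python
import numpy as np

def dynamic_partition_1d(data, partitions, num_partitions):
    # Entries with partitions[j] == i land in output i, keeping their
    # original (lexicographic) order, as described above.
    data = np.asarray(data)
    partitions = np.asarray(partitions)
    return [data[partitions == i] for i in range(num_partitions)]

outs = dynamic_partition_1d([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2)
print(outs[0])  # [10 20 50]
print(outs[1])  # [30 40]
```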
+ +##### Args: + + +* `data`: A `Tensor`. +* `partitions`: A `Tensor` of type `int32`. + Any shape. Indices in the range `[0, num_partitions)`. +* `num_partitions`: An `int` that is `>= 1`. + The number of partitions to output. +* `name`: A name for the operation (optional). + +##### Returns: + + A list of `num_partitions` `Tensor` objects of the same type as data. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.equal.md new file mode 100644 index 0000000000..998db9189f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.equal.md @@ -0,0 +1,15 @@ +### `tf.equal(x, y, name=None)` {#equal} + +Returns the truth value of (x == y) element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.AbortedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.AbortedError.md new file mode 100644 index 0000000000..f2bc775dcb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.AbortedError.md @@ -0,0 +1,15 @@ +The operation was aborted, typically due to a concurrent action. + +For example, running a +[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue) +operation may raise `AbortedError` if a +[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) operation +previously ran. + +- - - + +#### `tf.errors.AbortedError.__init__(node_def, op, message)` {#AbortedError.__init__} + +Creates an `AbortedError`. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md new file mode 100644 index 0000000000..a8a81494c8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md @@ -0,0 +1,14 @@ +Raised when the caller does not have permission to run an operation. + +For example, running the +[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader) +operation could raise `PermissionDeniedError` if it receives the name of a +file for which the user does not have the read file permission. + +- - - + +#### `tf.errors.PermissionDeniedError.__init__(node_def, op, message)` {#PermissionDeniedError.__init__} + +Creates a `PermissionDeniedError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.ResourceExhaustedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.ResourceExhaustedError.md new file mode 100644 index 0000000000..a01e255be5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.ResourceExhaustedError.md @@ -0,0 +1,12 @@ +Some resource has been exhausted. + +For example, this error might be raised if a per-user quota is +exhausted, or perhaps the entire file system is out of space. + +- - - + +#### `tf.errors.ResourceExhaustedError.__init__(node_def, op, message)` {#ResourceExhaustedError.__init__} + +Creates a `ResourceExhaustedError`. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnauthenticatedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnauthenticatedError.md deleted file mode 100644 index d3344dc6b1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnauthenticatedError.md +++ /dev/null @@ -1,11 +0,0 @@ -The request does not have valid authentication credentials. - -This exception is not currently used. - -- - - - -#### `tf.errors.UnauthenticatedError.__init__(node_def, op, message)` {#UnauthenticatedError.__init__} - -Creates an `UnauthenticatedError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.expand_dims.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.expand_dims.md new file mode 100644 index 0000000000..a188cda506 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.expand_dims.md @@ -0,0 +1,50 @@ +### `tf.expand_dims(input, dim, name=None)` {#expand_dims} + +Inserts a dimension of 1 into a tensor's shape. + +Given a tensor `input`, this operation inserts a dimension of 1 at the +dimension index `dim` of `input`'s shape. The dimension index `dim` starts at +zero; if you specify a negative number for `dim` it is counted backward from +the end. + +This operation is useful if you want to add a batch dimension to a single +element. For example, if you have a single image of shape `[height, width, +channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, +which will make the shape `[1, height, width, channels]`. 
+ +Other examples: + +```prettyprint +# 't' is a tensor of shape [2] +shape(expand_dims(t, 0)) ==> [1, 2] +shape(expand_dims(t, 1)) ==> [2, 1] +shape(expand_dims(t, -1)) ==> [2, 1] + +# 't2' is a tensor of shape [2, 3, 5] +shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5] +shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5] +shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1] +``` + +This operation requires that: + +`-1-input.dims() <= dim <= input.dims()` + +This operation is related to `squeeze()`, which removes dimensions of +size 1. + +##### Args: + + +* `input`: A `Tensor`. +* `dim`: A `Tensor` of type `int32`. + 0-D (scalar). Specifies the dimension index at which to + expand the shape of `input`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + Contains the same data as `input`, but its shape has an additional + dimension of size 1 added. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.fill.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.fill.md new file mode 100644 index 0000000000..b6e51fa634 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.fill.md @@ -0,0 +1,26 @@ +### `tf.fill(dims, value, name=None)` {#fill} + +Creates a tensor filled with a scalar value. + +This operation creates a tensor of shape `dims` and fills it with `value`. + +For example: + +```prettyprint +# Output tensor has shape [2, 3]. +fill([2, 3], 9) ==> [[9, 9, 9] + [9, 9, 9]] +``` + +##### Args: + + +* `dims`: A `Tensor` of type `int32`. + 1-D. Represents the shape of the output tensor. +* `value`: A `Tensor`. 0-D (scalar). Value to fill the returned tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `value`. 
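The shape arithmetic that `expand_dims` performs, including the negative-`dim` convention, can be sketched without TensorFlow. `expand_dims_shape` is a hypothetical helper for illustration only:

```python
def expand_dims_shape(shape, dim):
    """Sketch of how expand_dims computes the output shape.

    `dim` may be negative; it is counted from the end, with
    -1 - len(shape) <= dim <= len(shape).
    """
    if dim < 0:
        dim += len(shape) + 1  # normalize a negative index
    return shape[:dim] + [1] + shape[dim:]

# Examples from the docs above.
print(expand_dims_shape([2], 0))        # -> [1, 2]
print(expand_dims_shape([2], -1))       # -> [2, 1]
print(expand_dims_shape([2, 3, 5], 2))  # -> [2, 3, 1, 5]
```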
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_default_graph.md deleted file mode 100644 index bd734d1b98..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_default_graph.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.get_default_graph()` {#get_default_graph} - -Returns the default graph for the current thread. - -The returned graph will be the innermost graph on which a -`Graph.as_default()` context has been entered, or a global default -graph if none has been explicitly created. - -NOTE: The default graph is a property of the current thread. If you -create a new thread, and wish to use the default graph in that -thread, you must explicitly add a `with g.as_default():` in that -thread's function. - -##### Returns: - - The default `Graph` being used in the current thread. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_seed.md deleted file mode 100644 index ccf6712418..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_seed.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.get_seed(op_seed)` {#get_seed} - -Returns the local seeds an operation should use given an op-specific seed. - -Given operation-specific seed, `op_seed`, this helper function returns two -seeds derived from graph-level and op-level seeds. Many random operations -internally use the two seeds to allow user to change the seed globally for a -graph, or for only specific operations. - -For details on how the graph-level seed interacts with op seeds, see -[`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed). - -##### Args: - - -* `op_seed`: integer. - -##### Returns: - - A tuple of two integers that should be used for the local seed of this - operation. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_session_tensor.md new file mode 100644 index 0000000000..215647a989 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.get_session_tensor.md @@ -0,0 +1,22 @@ +### `tf.get_session_tensor(dtype, name=None)` {#get_session_tensor} + +Get the tensor of type `dtype` by feeding a tensor handle. + +This is EXPERIMENTAL and subject to change. + +Get the value of the tensor from a tensor handle. The tensor +is produced in a previous run() and stored in the state of the +session. + +##### Args: + + +* `dtype`: The type of the output tensor. +* `name`: Optional name prefix for the return tensor. + +##### Returns: + + A pair of tensors. The first is a placeholder for feeding a + tensor handle and the second is the tensor in the session state + keyed by the tensor handle. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.global_norm.md new file mode 100644 index 0000000000..d37d4228b2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.global_norm.md @@ -0,0 +1,27 @@ +### `tf.global_norm(t_list, name=None)` {#global_norm} + +Computes the global norm of multiple tensors. + +Given a tuple or list of tensors `t_list`, this operation returns the +global norm of the elements in all tensors in `t_list`. The global norm is +computed as: + +`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))` + +Any entries in `t_list` that are of type None are ignored. + +##### Args: + + +* `t_list`: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. +* `name`: A name for the operation (optional). + +##### Returns: + + A 0-D (scalar) `Tensor` of type `float`. + +##### Raises: + + +* `TypeError`: If `t_list` is not a sequence. 
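The `global_norm` formula above is straightforward to sketch with the standard library, representing each tensor as a flat list of floats. `global_norm_sketch` is a hypothetical helper, not the TensorFlow implementation:

```python
import math

def global_norm_sketch(t_list):
    """Sketch of tf.global_norm: sqrt of the sum of squared L2 norms.

    None entries are ignored, mirroring the documented behavior.
    """
    total = 0.0
    for t in t_list:
        if t is None:
            continue
        total += sum(v * v for v in t)  # l2norm(t)**2
    return math.sqrt(total)

print(global_norm_sketch([[3.0, 4.0], None, [12.0]]))  # -> 13.0
```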
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.ifft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.ifft2d.md new file mode 100644 index 0000000000..0ca8eb8dc1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.ifft2d.md @@ -0,0 +1,15 @@ +### `tf.ifft2d(input, name=None)` {#ifft2d} + +Compute the inverse 2-dimensional discrete Fourier Transform. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 matrix. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + The inverse 2D Fourier Transform of `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.adjust_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.adjust_brightness.md new file mode 100644 index 0000000000..7743f0180c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.adjust_brightness.md @@ -0,0 +1,25 @@ +### `tf.image.adjust_brightness(image, delta)` {#adjust_brightness} + +Adjust the brightness of RGB or Grayscale images. + +This is a convenience method that converts an RGB image to float +representation, adjusts its brightness, and then converts it back to the +original data type. If several adjustments are chained it is advisable to +minimize the number of redundant conversions. + +The value `delta` is added to all components of the tensor `image`. Both +`image` and `delta` are converted to `float` before adding (and `image` is +scaled appropriately if it is in fixed-point representation). For regular +images, `delta` should be in the range `[0,1)`, as it is added to the image in +floating point representation, where pixel values are in the `[0,1)` range. + +##### Args: + + +* `image`: A tensor. +* `delta`: A scalar. Amount to add to the pixel values. 
+ +##### Returns: + + A brightness-adjusted tensor of the same shape and type as `image`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_jpeg.md new file mode 100644 index 0000000000..24b1886c10 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_jpeg.md @@ -0,0 +1,51 @@ +### `tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)` {#encode_jpeg} + +JPEG-encode an image. + +`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`. + +The attr `format` can be used to override the color format of the encoded +output. Values can be: + +* `''`: Use a default format based on the number of channels in the image. +* `grayscale`: Output a grayscale JPEG image. The `channels` dimension + of `image` must be 1. +* `rgb`: Output an RGB JPEG image. The `channels` dimension + of `image` must be 3. + +If `format` is not specified or is the empty string, a default format is picked +based on the number of channels in `image`: + +* 1: Output a grayscale image. +* 3: Output an RGB image. + +##### Args: + + +* `image`: A `Tensor` of type `uint8`. + 3-D with shape `[height, width, channels]`. +* `format`: An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`. + Per pixel image format. +* `quality`: An optional `int`. Defaults to `95`. + Quality of the compression from 0 to 100 (higher is better and slower). +* `progressive`: An optional `bool`. Defaults to `False`. + If True, create a JPEG that loads progressively (coarse to fine). +* `optimize_size`: An optional `bool`. Defaults to `False`. + If True, spend CPU/RAM to reduce size with no quality change. +* `chroma_downsampling`: An optional `bool`. Defaults to `True`. 
+ See http://en.wikipedia.org/wiki/Chroma_subsampling. +* `density_unit`: An optional `string` from: `"in", "cm"`. Defaults to `"in"`. + Unit used to specify `x_density` and `y_density`: + pixels per inch (`'in'`) or centimeter (`'cm'`). +* `x_density`: An optional `int`. Defaults to `300`. + Horizontal pixels per density unit. +* `y_density`: An optional `int`. Defaults to `300`. + Vertical pixels per density unit. +* `xmp_metadata`: An optional `string`. Defaults to `""`. + If not empty, embed this XMP metadata in the image header. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `string`. 0-D. JPEG-encoded image. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_png.md new file mode 100644 index 0000000000..fa073a771f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.encode_png.md @@ -0,0 +1,28 @@ +### `tf.image.encode_png(image, compression=None, name=None)` {#encode_png} + +PNG-encode an image. + +`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` +where `channels` is: + +* 1: for grayscale. +* 2: for grayscale + alpha. +* 3: for RGB. +* 4: for RGBA. + +The ZLIB compression level, `compression`, can be -1 for the PNG-encoder +default or a value from 0 to 9. 9 is the highest compression level, generating +the smallest output, but is slower. + +##### Args: + + +* `image`: A `Tensor`. Must be one of the following types: `uint8`, `uint16`. + 3-D with shape `[height, width, channels]`. +* `compression`: An optional `int`. Defaults to `-1`. Compression level. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `string`. 0-D. PNG-encoded image. 
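The float round-trip described for `adjust_brightness` above (scale to `[0, 1)`, add `delta`, convert back to the fixed-point range) can be sketched for a flat list of uint8 pixel values. This is an illustrative approximation, including clipping back into range, not the exact TensorFlow conversion code:

```python
def adjust_brightness_sketch(pixels, delta):
    """Sketch of adjust_brightness for a flat list of uint8 pixel values."""
    out = []
    for p in pixels:
        v = p / 255.0 + delta        # image and delta combined in float
        v = min(max(v, 0.0), 1.0)    # clip before converting back
        out.append(int(round(v * 255.0)))
    return out

print(adjust_brightness_sketch([0, 255], 0.2))  # -> [51, 255]
```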
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md deleted file mode 100644 index 04c155c03c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#pad_to_bounding_box} - -Pad `image` with zeros to the specified `height` and `width`. - -Adds `offset_height` rows of zeros on top, `offset_width` columns of -zeros on the left, and then pads the image on the bottom and right -with zeros until it has dimensions `target_height`, `target_width`. - -This op does nothing if `offset_*` is zero and the image already has size -`target_height` by `target_width`. - -##### Args: - - -* `image`: 3-D tensor with shape `[height, width, channels]` -* `offset_height`: Number of rows of zeros to add on top. -* `offset_width`: Number of columns of zeros to add on the left. -* `target_height`: Height of output image. -* `target_width`: Width of output image. - -##### Returns: - - 3-D tensor of shape `[target_height, target_width, channels]` - -##### Raises: - - -* `ValueError`: If the shape of `image` is incompatible with the `offset_*` or - `target_*` arguments - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.random_flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.random_flip_up_down.md deleted file mode 100644 index 7ed36f5df2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.random_flip_up_down.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.image.random_flip_up_down(image, seed=None)` {#random_flip_up_down} - -Randomly flips an image vertically (upside down). 
- -With a 1 in 2 chance, outputs the contents of `image` flipped along the first -dimension, which is `height`. Otherwise output the image as-is. - -##### Args: - - -* `image`: A 3-D tensor of shape `[height, width, channels].` -* `seed`: A Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. - -##### Returns: - - A 3-D tensor of the same type and shape as `image`. - -##### Raises: - - -* `ValueError`: if the shape of `image` not supported. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_bilinear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_bilinear.md new file mode 100644 index 0000000000..a9580ca199 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_bilinear.md @@ -0,0 +1,24 @@ +### `tf.image.resize_bilinear(images, size, align_corners=None, name=None)` {#resize_bilinear} + +Resize `images` to `size` using bilinear interpolation. + +Input images can be of different types but output images are always float. + +##### Args: + + +* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. + 4-D with shape `[batch, height, width, channels]`. +* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The + new size for the images. +* `align_corners`: An optional `bool`. Defaults to `False`. + If true, rescale input by (new_height - 1) / (height - 1), which + exactly aligns the 4 corners of images and resized images. If false, rescale + by new_height / height. Treat similarly the width dimension. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. 4-D with shape + `[batch, new_height, new_width, channels]`. 
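The effect of `align_corners` in `resize_bilinear` above is easiest to see in the source sampling coordinates. A sketch of the documented rescale rule (hypothetical helper name, 1-D for clarity):

```python
def source_positions(in_size, out_size, align_corners):
    """Sketch of the source coordinates sampled along one dimension.

    With align_corners=True the scale is (in - 1) / (out - 1), so the first
    and last output samples land exactly on the first and last input pixels;
    otherwise the scale is in / out, per the docs above.
    """
    if align_corners and out_size > 1:
        scale = (in_size - 1) / (out_size - 1)
    else:
        scale = in_size / out_size
    return [i * scale for i in range(out_size)]

print(source_positions(4, 7, align_corners=True))   # last sample is 3.0
print(source_positions(4, 8, align_corners=False))  # uniform scale of 0.5
```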
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rgb_to_grayscale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rgb_to_grayscale.md deleted file mode 100644 index bf9b6846e0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rgb_to_grayscale.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.image.rgb_to_grayscale(images, name=None)` {#rgb_to_grayscale} - -Converts one or more images from RGB to Grayscale. - -Outputs a tensor of the same `DType` and rank as `images`. The size of the -last dimension of the output is 1, containing the Grayscale value of the -pixels. - -##### Args: - - -* `images`: The RGB tensor to convert. Last dimension must have size 3 and - should contain RGB values. -* `name`: A name for the operation (optional). - -##### Returns: - - The converted grayscale image(s). - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.initialize_local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.initialize_local_variables.md deleted file mode 100644 index 2a56dbb9d6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.initialize_local_variables.md +++ /dev/null @@ -1,10 +0,0 @@ -### `tf.initialize_local_variables()` {#initialize_local_variables} - -Returns an Op that initializes all local variables. - -This is just a shortcut for `initialize_variables(local_variables())` - -##### Returns: - - An Op that initializes all local variables in the graph. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.inv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.inv.md deleted file mode 100644 index dfff52be12..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.inv.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.inv(x, name=None)` {#inv} - -Computes the reciprocal of x element-wise. - -I.e., \\(y = 1 / x\\). 
- -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_finite.md new file mode 100644 index 0000000000..db038e9919 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_finite.md @@ -0,0 +1,14 @@ +### `tf.is_finite(x, name=None)` {#is_finite} + +Returns which elements of x are finite. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_variable_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_variable_initialized.md new file mode 100644 index 0000000000..d8383439ab --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.is_variable_initialized.md @@ -0,0 +1,14 @@ +### `tf.is_variable_initialized(variable)` {#is_variable_initialized} + +Tests if a variable has been initialized. + +##### Args: + + +* `variable`: A `Variable`. + +##### Returns: + + Returns a scalar boolean Tensor, `True` if the variable has been + initialized, `False` otherwise. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.lbeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.lbeta.md new file mode 100644 index 0000000000..e3ee18dfb3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.lbeta.md @@ -0,0 +1,31 @@ +### `tf.lbeta(x, name='lbeta')` {#lbeta} + +Computes `ln(|Beta(x)|)`, reducing along the last dimension. 
+ +Given one-dimensional `z = [z_0,...,z_{K-1}]`, we define + +```Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)``` + +And for `n + 1` dimensional `x` with shape `[N1, ..., Nn, K]`, we define +`lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|)`. In other words, +the last dimension is treated as the `z` vector. + +Note that if `z = [u, v]`, then +`Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt`, which defines the traditional +bivariate beta function. + +##### Args: + + +* `x`: A rank `n + 1` `Tensor` with type `float`, or `double`. +* `name`: A name for the operation (optional). + +##### Returns: + + The logarithm of `|Beta(x)|` reducing along the last dimension. + +##### Raises: + + +* `ValueError`: If `x` is empty with rank one or less. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.less.md new file mode 100644 index 0000000000..8791d0366a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.less.md @@ -0,0 +1,15 @@ +### `tf.less(x, y, name=None)` {#less} + +Returns the truth value of (x < y) element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. 
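The `lbeta` reduction defined above reduces to log-gamma arithmetic for a single vector `z`: `ln|Beta(z)| = sum_j ln Gamma(z_j) - ln Gamma(sum_j z_j)`. A stdlib sketch (hypothetical helper, single 1-D input rather than the batched TensorFlow op):

```python
from math import lgamma

def lbeta_sketch(z):
    """Sketch of tf.lbeta for one 1-D vector z."""
    return sum(lgamma(v) for v in z) - lgamma(sum(z))

# Beta([1, 1]) is the integral of 1 over [0, 1], i.e. 1, so its log is 0.
print(lbeta_sketch([1.0, 1.0]))  # -> 0.0
# Beta([2, 2]) = Gamma(2) * Gamma(2) / Gamma(4) = 1/6, so its log is -ln(6).
print(lbeta_sketch([2.0, 2.0]))
```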
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md new file mode 100644 index 0000000000..60d768a624 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md @@ -0,0 +1,23 @@ +### `tf.load_file_system_library(library_filename)` {#load_file_system_library} + +Loads a TensorFlow plugin, containing file system implementation. + +Pass `library_filename` to a platform-specific mechanism for dynamically +loading a library. The rules for determining the exact location of the +library are platform-specific and are not documented here. + +##### Args: + + +* `library_filename`: Path to the plugin. + Relative or absolute filesystem path to a dynamic library file. + +##### Returns: + + None. + +##### Raises: + + +* `RuntimeError`: when unable to load the library. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md deleted file mode 100644 index 40a0bb2e43..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.logical_not(x, name=None)` {#logical_not} - -Returns the truth value of NOT x element-wise. - -##### Args: - - -* `x`: A `Tensor` of type `bool`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. 
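The element-wise boolean ops in this shard compose: the `tf.logical_xor` helper documented below is literally `(x | y) & ~(x & y)`. That identity can be checked exhaustively with plain Python booleans:

```python
# Truth-table check of x ^ y == (x | y) & ~(x & y),
# the identity used by tf.logical_xor.
for x in (False, True):
    for y in (False, True):
        assert (x != y) == ((x or y) and not (x and y))
print("xor identity holds")
```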
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_or.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_or.md new file mode 100644 index 0000000000..be18e65e92 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_or.md @@ -0,0 +1,15 @@ +### `tf.logical_or(x, y, name=None)` {#logical_or} + +Returns the truth value of x OR y element-wise. + +##### Args: + + +* `x`: A `Tensor` of type `bool`. +* `y`: A `Tensor` of type `bool`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_xor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_xor.md deleted file mode 100644 index 20db3e60a6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_xor.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.logical_xor(x, y, name='LogicalXor')` {#logical_xor} - -x ^ y = (x | y) & ~(x & y). - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_inverse.md deleted file mode 100644 index 4172badef5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_inverse.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.matrix_inverse(input, adjoint=None, name=None)` {#matrix_inverse} - -Calculates the inverse of a square invertible matrix or its adjoint (conjugate - -transpose). - -The op uses LU decomposition with partial pivoting to compute the inverse. - -If the matrix is not invertible there is no guarantee what the op does. It -may detect the condition and raise an exception or it may simply return a -garbage result. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[M, M]`. 
-* `adjoint`: An optional `bool`. Defaults to `False`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - Shape is `[M, M]`. If `adjoint` is `False` then `output` contains the - matrix inverse of `input`. If `adjoint` is `True` then `output` contains the - matrix inverse of the adjoint of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_triangular_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_triangular_solve.md deleted file mode 100644 index 5787145231..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.matrix_triangular_solve.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#matrix_triangular_solve} - -Solves a system of linear equations with an upper or lower triangular matrix by - -backsubstitution. - -`matrix` is a matrix of shape `[M, M]`. If `lower` is `True` then the strictly -upper triangular part of `matrix` is assumed to be zero and not accessed. -If `lower` is `False` then the strictly lower triangular part of `matrix` is -assumed to be zero and not accessed. -`rhs` is a matrix of shape `[M, K]`. - -The output is a matrix of shape `[M, K]`. -If `adjoint` is `False` then `output` satisfies the matrix equation -`matrix` * `output` = `rhs`. -If `adjoint` is `True` then `output` satisfies the matrix equation -`adjoint(matrix)` * `output` = `rhs`. - -##### Args: - - -* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[M, M]`. -* `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[M, K]`. -* `lower`: An optional `bool`. Defaults to `True`. - Boolean indicating whether `matrix` is lower or upper triangular -* `adjoint`: An optional `bool`. Defaults to `False`. 
- Boolean indicating whether to solve with `matrix` or its adjoint. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `matrix`. Shape is `[M, K]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.mod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.mod.md new file mode 100644 index 0000000000..5bfe1058a7 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.mod.md @@ -0,0 +1,15 @@ +### `tf.mod(x, y, name=None)` {#mod} + +Returns element-wise remainder of division. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.neg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.neg.md new file mode 100644 index 0000000000..519fd9a875 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.neg.md @@ -0,0 +1,16 @@ +### `tf.neg(x, name=None)` {#neg} + +Computes numerical negative value element-wise. + +I.e., \\(y = -x\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
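The back-substitution described in the `matrix_triangular_solve` docs above can be sketched in plain Python. `lower_triangular_solve` is a hypothetical helper covering only the `lower=True`, `adjoint=False` case with a single right-hand-side vector:

```python
def lower_triangular_solve(matrix, rhs):
    """Solve matrix @ x = rhs by forward substitution.

    `matrix` is a lower-triangular list of row lists; the strictly upper
    part is never read, mirroring the documented behavior.
    """
    n = len(rhs)
    x = [0.0] * n
    for i in range(n):
        s = sum(matrix[i][j] * x[j] for j in range(i))
        x[i] = (rhs[i] - s) / matrix[i][i]
    return x

m = [[2.0, 0.0],
     [1.0, 3.0]]
print(lower_triangular_solve(m, [4.0, 11.0]))  # -> [2.0, 3.0]
```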
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.compute_accidental_hits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.compute_accidental_hits.md new file mode 100644 index 0000000000..9d5bb30303 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.compute_accidental_hits.md @@ -0,0 +1,45 @@ +### `tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)` {#compute_accidental_hits} + +Compute the position ids in `sampled_candidates` matching `true_classes`. + +In Candidate Sampling, this operation facilitates virtually removing +sampled classes which happen to match target classes. This is done +in Sampled Softmax and Sampled Logistic. + +See our [Candidate Sampling Algorithms +Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf). + +We presuppose that the `sampled_candidates` are unique. + +We call it an 'accidental hit' when one of the target classes +matches one of the sampled classes. This operation reports +accidental hits as triples `(index, id, weight)`, where `index` +represents the row number in `true_classes`, `id` represents the +position in `sampled_candidates`, and weight is `-FLOAT_MAX`. + +The result of this op should be passed through a `sparse_to_dense` +operation, then added to the logits of the sampled classes. This +removes the contradictory effect of accidentally sampling the true +target classes as noise classes for the same example. + +##### Args: + + +* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. +* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. + The sampled_candidates output of CandidateSampler. +* `num_true`: An `int`. The number of target classes per training example. +* `seed`: An `int`. An operation-specific seed. Default is 0. +* `name`: A name for the operation (optional). 
+ +##### Returns: + + +* `indices`: A `Tensor` of type `int32` and shape `[num_accidental_hits]`. + Values indicate rows in `true_classes`. +* `ids`: A `Tensor` of type `int64` and shape `[num_accidental_hits]`. + Values indicate positions in `sampled_candidates`. +* `weights`: A `Tensor` of type `float` and shape `[num_accidental_hits]`. + Each value is `-FLOAT_MAX`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv2d.md new file mode 100644 index 0000000000..684a3d5727 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv2d.md @@ -0,0 +1,49 @@ +### `tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d} + +Computes a 2-D convolution given 4-D `input` and `filter` tensors. + +Given an input tensor of shape `[batch, in_height, in_width, in_channels]` +and a filter / kernel tensor of shape +`[filter_height, filter_width, in_channels, out_channels]`, this op +performs the following: + +1. Flattens the filter to a 2-D matrix with shape + `[filter_height * filter_width * in_channels, output_channels]`. +2. Extracts image patches from the input tensor to form a *virtual* + tensor of shape `[batch, out_height, out_width, + filter_height * filter_width * in_channels]`. +3. For each patch, right-multiplies the filter matrix and the image patch + vector. + +In detail, with the default NHWC format, + + output[b, i, j, k] = + sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * + filter[di, dj, q, k] + +Must have `strides[0] = strides[3] = 1`. For the most common case of the same +horizontal and vertical strides, `strides = [1, stride, stride, 1]`. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `filter`: A `Tensor`. Must have the same type as `input`. +* `strides`: A list of `ints`.
+ 1-D of length 4. The stride of the sliding window for each dimension + of `input`. Must be in the same order as the dimension specified with format. +* `padding`: A `string` from: `"SAME", "VALID"`. + The type of padding algorithm to use. +* `use_cudnn_on_gpu`: An optional `bool`. Defaults to `True`. +* `data_format`: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. + Specify the data format of the input and output data. With the + default format "NHWC", the data is stored in the order of: + [batch, in_height, in_width, in_channels]. + Alternatively, the format could be "NCHW", the data storage order of: + [batch, in_channels, in_height, in_width]. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md deleted file mode 100644 index 886744c595..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.nn.conv3d(input, filter, strides, padding, name=None)` {#conv3d} - -Computes a 3-D convolution given 5-D `input` and `filter` tensors. - -In signal processing, cross-correlation is a measure of similarity of -two waveforms as a function of a time-lag applied to one of them. This -is also known as a sliding dot product or sliding inner-product. - -Our Conv3D implements a form of cross-correlation. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Shape `[batch, in_depth, in_height, in_width, in_channels]`. -* `filter`: A `Tensor`. Must have the same type as `input`. - Shape `[filter_depth, filter_height, filter_width, in_channels, out_channels]`. 
- `in_channels` must match between `input` and `filter`. -* `strides`: A list of `ints` that has length `>= 5`. - 1-D tensor of length 5. The stride of the sliding window for each - dimension of `input`. Must have `strides[0] = strides[4] = 1`. -* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.depthwise_conv2d_native.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.depthwise_conv2d_native.md deleted file mode 100644 index c2736f1ba9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.depthwise_conv2d_native.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.nn.depthwise_conv2d_native(input, filter, strides, padding, name=None)` {#depthwise_conv2d_native} - -Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. - -Given an input tensor of shape `[batch, in_height, in_width, in_channels]` -and a filter / kernel tensor of shape -`[filter_height, filter_width, in_channels, channel_multiplier]`, containing -`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies -a different filter to each input channel (expanding from 1 channel to -`channel_multiplier` channels for each), then concatenates the results -together. Thus, the output has `in_channels * channel_multiplier` channels. - -for k in 0..in_channels-1 - for q in 0..channel_multiplier-1 - output[b, i, j, k * channel_multiplier + q] = - sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * - filter[di, dj, k, q] - -Must have `strides[0] = strides[3] = 1`. For the most common case of the same -horizontal and vertices strides, `strides = [1, stride, stride, 1]`. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. 
-* `filter`: A `Tensor`. Must have the same type as `input`. -* `strides`: A list of `ints`. - 1-D of length 4. The stride of the sliding window for each dimension - of `input`. -* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.elu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.elu.md deleted file mode 100644 index cef8dedb50..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.elu.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.nn.elu(features, name=None)` {#elu} - -Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise. - -See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) -](http://arxiv.org/abs/1511.07289) - -##### Args: - - -* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `features`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.in_top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.in_top_k.md deleted file mode 100644 index f46780649d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.in_top_k.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.nn.in_top_k(predictions, targets, k, name=None)` {#in_top_k} - -Says whether the targets are in the top `K` predictions. - -This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the -prediction for the target class is among the top `k` predictions among -all predictions for example `i`. 
Note that the behavior of `InTopK` differs -from the `TopK` op in its handling of ties; if multiple classes have the -same prediction value and straddle the top-`k` boundary, all of those -classes are considered to be in the top `k`. - -More formally, let - - \\(predictions_i\\) be the predictions for all classes for example `i`, - \\(targets_i\\) be the target class for example `i`, - \\(out_i\\) be the output for example `i`, - -$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$ - -##### Args: - - -* `predictions`: A `Tensor` of type `float32`. - A `batch_size` x `classes` tensor. -* `targets`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A `batch_size` vector of class ids. -* `k`: An `int`. Number of top elements to look at for computing precision. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md new file mode 100644 index 0000000000..819c0ad068 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md @@ -0,0 +1,31 @@ +### `tf.nn.top_k(input, k=1, sorted=True, name=None)` {#top_k} + +Finds values and indices of the `k` largest entries for the last dimension. + +If the input is a vector (rank-1), finds the `k` largest entries in the vector +and outputs their values and indices as vectors. Thus `values[j]` is the +`j`-th largest entry in `input`, and its index is `indices[j]`. + +For matrices (resp. higher rank input), computes the top `k` entries in each +row (resp. vector along the last dimension). Thus, + + values.shape = indices.shape = input.shape[:-1] + [k] + +If two elements are equal, the lower-index element appears first. + +##### Args: + + +* `input`: 1-D or higher `Tensor` with last dimension at least `k`. 
+* `k`: 0-D `int32` `Tensor`. Number of top elements to look for along the last + dimension (along each row for matrices). +* `sorted`: If true the resulting `k` elements will be sorted by the values in + descending order. +* `name`: Optional name for the operation. + +##### Returns: + + +* `values`: The `k` largest elements along each last dimensional slice. +* `indices`: The indices of `values` within the last dimension of `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.uniform_candidate_sampler.md new file mode 100644 index 0000000000..c34056dc84 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.uniform_candidate_sampler.md @@ -0,0 +1,49 @@ +### `tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#uniform_candidate_sampler} + +Samples a set of classes using a uniform base distribution. + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is the uniform distribution +over the range of integers `[0, range_max)`. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. 
+ +##### Args: + + +* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. +* `num_true`: An `int`. The number of target classes per training example. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. +* `unique`: A `bool`. Determines whether all sampled classes in a batch are + unique. +* `range_max`: An `int`. The number of possible classes. +* `seed`: An `int`. An operation-specific seed. Default is 0. +* `name`: A name for the operation (optional). + +##### Returns: + + +* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. + The sampled classes. +* `true_expected_count`: A tensor of type `float`. Same shape as + `true_classes`. The expected counts under the sampling distribution + of each of `true_classes`. +* `sampled_expected_count`: A tensor of type `float`. Same shape as + `sampled_candidates`. The expected counts under the sampling distribution + of each of `sampled_candidates`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.one_hot.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.one_hot.md new file mode 100644 index 0000000000..eebb6ab643 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.one_hot.md @@ -0,0 +1,129 @@ +### `tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)` {#one_hot} + +Returns a one-hot tensor. + +The locations represented by indices in `indices` take value `on_value`, +while all other locations take value `off_value`. + +`on_value` and `off_value` must have matching data types. If `dtype` is also +provided, they must be the same data type as specified by `dtype`. + +If `on_value` is not provided, it will default to the value `1` with type +`dtype` + +If `off_value` is not provided, it will default to the value `0` with type +`dtype` + +If the input `indices` is rank `N`, the output will have rank `N+1`. 
The +new axis is created at dimension `axis` (default: the new axis is appended +at the end). + +If `indices` is a scalar the output shape will be a vector of length `depth` + +If `indices` is a vector of length `features`, the output shape will be: +``` + features x depth if axis == -1 + depth x features if axis == 0 +``` + +If `indices` is a matrix (batch) with shape `[batch, features]`, the output +shape will be: +``` + batch x features x depth if axis == -1 + batch x depth x features if axis == 1 + depth x batch x features if axis == 0 +``` + +If `dtype` is not provided, it will attempt to assume the data type of +`on_value` or `off_value`, if one or both are passed in. If none of +`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the +value `tf.float32` + +Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.), +both `on_value` and `off_value` _must_ be provided to `one_hot` + +Examples +========= + +Suppose that + +``` + indices = [0, 2, -1, 1] + depth = 3 + on_value = 5.0 + off_value = 0.0 + axis = -1 +``` + +Then output is `[4 x 3]`: + +``` + output = + [5.0 0.0 0.0] // one_hot(0) + [0.0 0.0 5.0] // one_hot(2) + [0.0 0.0 0.0] // one_hot(-1) + [0.0 5.0 0.0] // one_hot(1) +``` + +Suppose that + +``` + indices = [[0, 2], [1, -1]] + depth = 3 + on_value = 1.0 + off_value = 0.0 + axis = -1 +``` + +Then output is `[2 x 2 x 3]`: + +``` + output = + [ + [1.0, 0.0, 0.0] // one_hot(0) + [0.0, 0.0, 1.0] // one_hot(2) + ][ + [0.0, 1.0, 0.0] // one_hot(1) + [0.0, 0.0, 0.0] // one_hot(-1) + ] +``` + +Using default values for `on_value` and `off_value`: + +``` + indices = [0, 1, 2] + depth = 3 +``` + +The output will be + +``` + output = + [[1., 0., 0.], + [0., 1., 0.], + [0., 0., 1.]] +``` + +##### Args: + + +* `indices`: A `Tensor` of indices. +* `depth`: A scalar defining the depth of the one hot dimension. +* `on_value`: A scalar defining the value to fill in output when `indices[j] + = i`. 
(default: 1) +* `off_value`: A scalar defining the value to fill in output when `indices[j] + != i`. (default: 0) +* `axis`: The axis to fill (default: -1, a new inner-most axis). +* `dtype`: The data type of the output tensor. + +##### Returns: + + +* `output`: The one-hot tensor. + +##### Raises: + + +* `TypeError`: If the dtype of either `on_value` or `off_value` doesn't match `dtype`. +* `TypeError`: If the dtypes of `on_value` and `off_value` don't match one another. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.op_scope.md deleted file mode 100644 index c1002fd125..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.op_scope.md +++ /dev/null @@ -1,36 +0,0 @@ -### `tf.op_scope(values, name, default_name=None)` {#op_scope} - -Returns a context manager for use when defining a Python op. - -This context manager validates that the given `values` are from the -same graph, ensures that graph is the default graph, and pushes a -name scope. - -For example, to define a new Python op called `my_op`: - -```python -def my_op(a, b, c, name=None): - with tf.op_scope([a, b, c], name, "MyOp") as scope: - a = tf.convert_to_tensor(a, name="a") - b = tf.convert_to_tensor(b, name="b") - c = tf.convert_to_tensor(c, name="c") - # Define some computation that uses `a`, `b`, and `c`. - return foo_op(..., name=scope) -``` - -##### Args: - - -* `values`: The list of `Tensor` arguments that are passed to the op function. -* `name`: The name argument that is passed to the op function. -* `default_name`: The default name to use if the `name` argument is `None`. - -##### Returns: - - A context manager for use in defining Python ops. Yields the name scope. - -##### Raises: - - -* `ValueError`: if neither `name` nor `default_name` is provided.
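The `tf.one_hot` fill rules documented earlier can be sketched in a few lines of pure Python for the 1-D, `axis=-1` case (a simplification of the op, not its implementation; it reproduces the doc's first worked example, including the all-off row for the out-of-range index `-1`):

```python
def one_hot(indices, depth, on_value=1.0, off_value=0.0):
    """Sketch of tf.one_hot for a 1-D list of indices with axis=-1.
    A negative or out-of-range index yields an all-off row, matching
    the documented one_hot(-1) example."""
    return [[on_value if j == i else off_value for j in range(depth)]
            for i in indices]

# The doc's first example: indices=[0, 2, -1, 1], depth=3, on_value=5.0
out = one_hot([0, 2, -1, 1], depth=3, on_value=5.0, off_value=0.0)
```

Higher-rank inputs just apply the same rule along the last axis, which is why the output rank is `N+1`.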
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.parse_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.parse_example.md new file mode 100644 index 0000000000..2f2f511196 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.parse_example.md @@ -0,0 +1,153 @@ +### `tf.parse_example(serialized, features, name=None, example_names=None)` {#parse_example} + +Parses `Example` protos into a `dict` of tensors. + +Parses a number of serialized [`Example`] +(https://www.tensorflow.org/code/tensorflow/core/example/example.proto) +protos given in `serialized`. + +`example_names` may contain descriptive names for the corresponding serialized +protos. These may be useful for debugging purposes, but they have no effect on +the output. If not `None`, `example_names` must be the same length as `serialized`. + +This op parses serialized examples into a dictionary mapping keys to `Tensor` +and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature` +and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a +`SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`. + +Each `VarLenFeature` maps to a `SparseTensor` of the specified type +representing a ragged matrix. Its indices are `[batch, index]` where `batch` +is the batch entry the value is from in `serialized`, and `index` is the +value's index in the list of values associated with that feature and example. + +Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or +`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`. + +`FixedLenFeature` entries with a `default_value` are optional. With no default +value, we will fail if that `Feature` is missing from any example in +`serialized`. 
+ +Examples: + +For example, if one expects a `tf.float32` sparse feature `ft` and three +serialized `Example`s are provided: + +``` +serialized = [ + features + { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, + features + { feature []}, + features + { feature { key: "ft" value { float_list { value: [3.0] } } } +] +``` + +then the output will look like: + +``` +{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]], + values=[1.0, 2.0, 3.0], + shape=(3, 2)) } +``` + +Given two `Example` input protos in `serialized`: + +``` +[ + features { + feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } } + feature { key: "gps" value { float_list { value: [] } } } + }, + features { + feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } } + feature { key: "dank" value { int64_list { value: [ 42 ] } } } + feature { key: "gps" value { } } + } +] +``` + +And arguments + +``` +example_names: ["input0", "input1"], +features: { + "kw": VarLenFeature(tf.string), + "dank": VarLenFeature(tf.int64), + "gps": VarLenFeature(tf.float32), +} +``` + +Then the output is a dictionary: + +```python +{ + "kw": SparseTensor( + indices=[[0, 0], [0, 1], [1, 0]], + values=["knit", "big", "emmy"] + shape=[2, 2]), + "dank": SparseTensor( + indices=[[1, 0]], + values=[42], + shape=[2, 1]), + "gps": SparseTensor( + indices=[], + values=[], + shape=[2, 0]), +} +``` + +For dense results in two serialized `Example`s: + +``` +[ + features { + feature { key: "age" value { int64_list { value: [ 0 ] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + }, + features { + feature { key: "age" value { int64_list { value: [] } } } + feature { key: "gender" value { bytes_list { value: [ "f" ] } } } + } +] +``` + +We can use arguments: + +``` +example_names: ["input0", "input1"], +features: { + "age": FixedLenFeature([], dtype=tf.int64, default_value=-1), + "gender": FixedLenFeature([], dtype=tf.string), +} +``` + +And the expected output is: + 
+```python +{ + "age": [[0], [-1]], + "gender": [["f"], ["f"]], +} +``` + +##### Args: + + +* `serialized`: A vector (1-D Tensor) of strings, a batch of binary + serialized `Example` protos. +* `features`: A `dict` mapping feature keys to `FixedLenFeature` or + `VarLenFeature` values. +* `name`: A name for this operation (optional). +* `example_names`: A vector (1-D Tensor) of strings (optional), the names of + the serialized protos in the batch. + +##### Returns: + + A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. + +##### Raises: + + +* `ValueError`: if any feature is invalid. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.random_shuffle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.random_shuffle.md new file mode 100644 index 0000000000..14f40d64af --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.random_shuffle.md @@ -0,0 +1,29 @@ +### `tf.random_shuffle(value, seed=None, name=None)` {#random_shuffle} + +Randomly shuffles a tensor along its first dimension. + +The tensor is shuffled along dimension 0, such that each `value[j]` is mapped +to one and only one `output[i]`. For example, a mapping that might occur for a +3x2 tensor is: + +```python +[[1, 2], [[5, 6], + [3, 4], ==> [1, 2], + [5, 6]] [3, 4]] +``` + +##### Args: + + +* `value`: A Tensor to be shuffled. +* `seed`: A Python integer. Used to create a random seed for the distribution. + See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for the operation (optional). + +##### Returns: + + A tensor of same shape and type as `value`, shuffled along its first + dimension. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_join.md deleted file mode 100644 index c65c6022ba..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_join.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.reduce_join(inputs, reduction_indices, keep_dims=None, separator=None, name=None)` {#reduce_join} - -Joins a string Tensor across the given dimensions. - -Computes the string join across dimensions in the given string Tensor of shape -`[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input -strings with the given separator (default: empty string). Negative indices are -counted backwards from the end, with `-1` being equivalent to `n - 1`. Passing -an empty `reduction_indices` joins all strings in linear index order and outputs -a scalar string. - - -For example: -``` -# tensor `a` is [["a", "b"], ["c", "d"]] -tf.reduce_join(a, 0) ==> ["ac", "bd"] -tf.reduce_join(a, 1) ==> ["ab", "cd"] -tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"] -tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"] -tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]] -tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]] -tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"] -tf.reduce_join(a, [0, 1]) ==> ["acbd"] -tf.reduce_join(a, [1, 0]) ==> ["abcd"] -tf.reduce_join(a, []) ==> ["abcd"] -``` - -##### Args: - - -* `inputs`: A `Tensor` of type `string`. - The input to be joined. All reduced indices must have non-zero size. -* `reduction_indices`: A `Tensor` of type `int32`. - The dimensions to reduce over. Dimensions are reduced in the - order specified. If `reduction_indices` has higher rank than `1`, it is - flattened. Omitting `reduction_indices` is equivalent to passing - `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported. -* `keep_dims`: An optional `bool`. 
Defaults to `False`. - If `True`, retain reduced dimensions with length `1`. -* `separator`: An optional `string`. Defaults to `""`. - The separator to use when joining. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. - Has shape equal to that of the input with reduced dimensions removed or - set to `1` depending on `keep_dims`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_max.md new file mode 100644 index 0000000000..f137e8091c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_max.md @@ -0,0 +1,25 @@ +### `tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_max} + +Computes the maximum of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +##### Args: + + +* `input_tensor`: The tensor to reduce. Should have numeric type. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. 
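The `reduction_indices` semantics of `tf.reduce_max` above can be sketched for the 2-D case in pure Python (an illustration of which axis collapses, not the op itself; `keep_dims` is omitted for brevity):

```python
def reduce_max(matrix, reduction_indices=None):
    """Sketch of tf.reduce_max for a 2-D list of lists.
    None  -> reduce all dimensions to a single scalar;
    0     -> max down each column (rank drops by one);
    1     -> max across each row (rank drops by one)."""
    if reduction_indices is None:
        return max(max(row) for row in matrix)
    if reduction_indices == 0:
        return [max(col) for col in zip(*matrix)]
    if reduction_indices == 1:
        return [max(row) for row in matrix]
    raise ValueError("only 2-D inputs are sketched here")

m = [[1, 5], [4, 2]]
```

Each entry in `reduction_indices` removes one dimension, which is exactly the rank-reduction rule the doc states.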
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_min.md new file mode 100644 index 0000000000..c93a902adc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reduce_min.md @@ -0,0 +1,25 @@ +### `tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_min} + +Computes the minimum of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +##### Args: + + +* `input_tensor`: The tensor to reduce. Should have numeric type. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.register_tensor_conversion_function.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.register_tensor_conversion_function.md deleted file mode 100644 index e15dcf7b40..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.register_tensor_conversion_function.md +++ /dev/null @@ -1,42 +0,0 @@ -### `tf.register_tensor_conversion_function(base_type, conversion_func, priority=100)` {#register_tensor_conversion_function} - -Registers a function for converting objects of `base_type` to `Tensor`. - -The conversion function must have the following signature: - - def conversion_func(value, dtype=None, name=None, as_ref=False): - # ... 
- -It must return a `Tensor` with the given `dtype` if specified. If the -conversion function creates a new `Tensor`, it should use the given -`name` if specified. All exceptions will be propagated to the caller. - -The conversion function may return `NotImplemented` for some -inputs. In this case, the conversion process will continue to try -subsequent conversion functions. - -If `as_ref` is true, the function must return a `Tensor` reference, -such as a `Variable`. - -NOTE: The conversion functions will execute in order of priority, -followed by order of registration. To ensure that a conversion function -`F` runs before another conversion function `G`, ensure that `F` is -registered with a smaller priority than `G`. - -##### Args: - - -* `base_type`: The base type or tuple of base types for all objects that - `conversion_func` accepts. -* `conversion_func`: A function that converts instances of `base_type` to - `Tensor`. -* `priority`: Optional integer that indicates the priority for applying this - conversion function. Conversion functions with smaller priority values - run earlier than conversion functions with larger priority values. - Defaults to 100. - -##### Raises: - - -* `TypeError`: If the arguments do not have the appropriate type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reshape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reshape.md new file mode 100644 index 0000000000..057b29e91f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reshape.md @@ -0,0 +1,72 @@ +### `tf.reshape(tensor, shape, name=None)` {#reshape} + +Reshapes a tensor. + +Given `tensor`, this operation returns a tensor that has the same values +as `tensor` with shape `shape`. + +If one component of `shape` is the special value -1, the size of that dimension +is computed so that the total size remains constant. In particular, a `shape` +of `[-1]` flattens into 1-D. 
At most one component of `shape` can be -1. + +If `shape` is 1-D or higher, then the operation returns a tensor with shape +`shape` filled with the values of `tensor`. In this case, the number of elements +implied by `shape` must be the same as the number of elements in `tensor`. + +For example: + +```prettyprint +# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] +# tensor 't' has shape [9] +reshape(t, [3, 3]) ==> [[1, 2, 3], + [4, 5, 6], + [7, 8, 9]] + +# tensor 't' is [[[1, 1], [2, 2]], +# [[3, 3], [4, 4]]] +# tensor 't' has shape [2, 2, 2] +reshape(t, [2, 4]) ==> [[1, 1, 2, 2], + [3, 3, 4, 4]] + +# tensor 't' is [[[1, 1, 1], +# [2, 2, 2]], +# [[3, 3, 3], +# [4, 4, 4]], +# [[5, 5, 5], +# [6, 6, 6]]] +# tensor 't' has shape [3, 2, 3] +# pass '[-1]' to flatten 't' +reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6] + +# -1 can also be used to infer the shape + +# -1 is inferred to be 9: +reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], + [4, 4, 4, 5, 5, 5, 6, 6, 6]] +# -1 is inferred to be 2: +reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], + [4, 4, 4, 5, 5, 5, 6, 6, 6]] +# -1 is inferred to be 3: +reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1], + [2, 2, 2], + [3, 3, 3]], + [[4, 4, 4], + [5, 5, 5], + [6, 6, 6]]] + +# tensor 't' is [7] +# shape `[]` reshapes to a scalar +reshape(t, []) ==> 7 +``` + +##### Args: + + +* `tensor`: A `Tensor`. +* `shape`: A `Tensor` of type `int32`. Defines the shape of the output tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `tensor`. 
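The `-1` inference rule in the `tf.reshape` doc above is simple arithmetic: the unknown dimension is the element count divided by the product of the known dimensions. A pure-Python sketch of just that shape resolution (not the data movement):

```python
def infer_shape(shape, num_elements):
    """Resolve a single -1 in `shape`, as tf.reshape does: the missing
    dimension is num_elements divided by the product of known dims."""
    if shape.count(-1) > 1:
        raise ValueError("at most one component of shape can be -1")
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    if -1 not in shape:
        return list(shape)
    if num_elements % known != 0:
        raise ValueError("cannot infer the -1 dimension")
    return [num_elements // known if d == -1 else d for d in shape]
```

This reproduces the doc's inferred shapes for the 18-element tensor `t`.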
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.round.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.round.md new file mode 100644 index 0000000000..8d2ce32921 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.round.md @@ -0,0 +1,21 @@ +### `tf.round(x, name=None)` {#round} + +Rounds the values of a tensor to the nearest integer, element-wise. + +For example: + +```python +# 'a' is [0.9, 2.5, 2.3, -4.4] +tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ] +``` + +##### Args: + + +* `x`: A `Tensor` of type `float` or `double`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of same shape and type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.saturate_cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.saturate_cast.md deleted file mode 100644 index 6a77c2791e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.saturate_cast.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.saturate_cast(value, dtype, name=None)` {#saturate_cast} - -Performs a safe saturating cast of `value` to `dtype`. - -This function casts the input to `dtype` without applying any scaling. If -there is a danger that values would over or underflow in the cast, this op -applies the appropriate clamping before the cast. - -##### Args: - - -* `value`: A `Tensor`. -* `dtype`: The desired output `DType`. -* `name`: A name for the operation (optional). - -##### Returns: - - `value` safely cast to `dtype`. 
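The clamp-then-cast behavior described above can be sketched in plain NumPy. This is an illustration of the semantics, not the TensorFlow kernel, and `saturate_cast_np` is a hypothetical helper name:

```python
import numpy as np

def saturate_cast_np(value, dtype):
    # Clamp to the representable range of the target dtype, then cast.
    info = np.iinfo(dtype) if np.issubdtype(dtype, np.integer) else np.finfo(dtype)
    return np.clip(value, info.min, info.max).astype(dtype)

x = np.array([-1.5, 10.0, 300.0])
print(saturate_cast_np(x, np.uint8).tolist())  # -> [0, 10, 255]
```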
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_update.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_update.md
deleted file mode 100644
index f865b8e9e8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_update.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.scatter_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_update}
-
-Applies sparse updates to a variable reference.
-
-This operation computes
-
-    # Scalar indices
-    ref[indices, ...] = updates[...]
-
-    # Vector indices (for each i)
-    ref[indices[i], ...] = updates[i, ...]
-
-    # High rank indices (for each i, ..., j)
-    ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the reset value.
-
-If values in `ref` are to be updated more than once, because there are
-duplicate entries in `indices`, the order in which the updates happen
-for each value is undefined.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
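For intuition, the vector-indices case above behaves like NumPy fancy-index assignment. The sketch below illustrates the semantics only (assuming unique indices; it is not the TF kernel):

```python
import numpy as np

ref = np.zeros((4, 2), dtype=np.int64)  # stands in for the mutable variable
indices = np.array([3, 1])
updates = np.array([[9, 9], [7, 7]])    # shape = indices.shape + ref.shape[1:]

ref[indices] = updates                  # ref[indices[i], :] = updates[i, :]
print(ref.tolist())  # -> [[0, 0], [7, 7], [0, 0], [9, 9]]
```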
- -
- -##### Args: - - -* `ref`: A mutable `Tensor`. Should be from a `Variable` node. -* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A tensor of indices into the first dimension of `ref`. -* `updates`: A `Tensor`. Must have the same type as `ref`. - A tensor of updated values to store in `ref`. -* `use_locking`: An optional `bool`. Defaults to `True`. - If True, the assignment will be protected by a lock; - otherwise the behavior is undefined, but may exhibit less contention. -* `name`: A name for the operation (optional). - -##### Returns: - - Same as `ref`. Returned as a convenience for operations that want - to use the updated values after the update is done. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.segment_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.segment_min.md new file mode 100644 index 0000000000..5cacf2cf72 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.segment_min.md @@ -0,0 +1,31 @@ +### `tf.segment_min(data, segment_ids, name=None)` {#segment_min} + +Computes the minimum along segments of a tensor. + +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +Computes a tensor such that +\\(output_i = \min_j(data_j)\\) where `min` is over `j` such +that `segment_ids[j] == i`. + +
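The formula can be illustrated with a small NumPy sketch (`segment_min_np` is an assumed helper, not the TF kernel; it relies on the op's contract that `segment_ids` is sorted):

```python
import numpy as np

def segment_min_np(data, segment_ids):
    # One output row per segment id i: the elementwise min over the
    # rows j with segment_ids[j] == i.
    num_segments = int(segment_ids[-1]) + 1
    return np.array([data[segment_ids == i].min(axis=0)
                     for i in range(num_segments)])

data = np.array([[1, 2], [3, 0], [5, 4]])
segment_ids = np.array([0, 0, 1])
print(segment_min_np(data, segment_ids).tolist())  # -> [[1, 0], [5, 4]]
```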
+ +
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.select.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.select.md deleted file mode 100644 index b77c9612e8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.select.md +++ /dev/null @@ -1,56 +0,0 @@ -### `tf.select(condition, t, e, name=None)` {#select} - -Selects elements from `t` or `e`, depending on `condition`. - -The `t`, and `e` tensors must all have the same shape, -and the output will also have that shape. The `condition` tensor -must be a scalar if `t` and `e` are scalars. If `t` and `e` are vectors -or higher rank, then `condition` must be either a vector with size -matching the first dimension of `t`, or must have the same shape as `t`. - -The `condition` tensor acts as a mask that chooses, based on the value at each -element, whether the corresponding element / row in the output should be -taken from `t` (if true) or `e` (if false). - -If `condition` is a vector and `t` and `e` are higher rank matrices, then -it chooses which row (outer dimension) to copy from `t` and `e`. -If `condition` has the same shape as `t` and `e`, then it chooses which -element to copy from `t` and `e`. 
- -For example: - -```prettyprint -# 'condition' tensor is [[True, False] -# [False, True]] -# 't' is [[1, 2], -# [3, 4]] -# 'e' is [[5, 6], -# [7, 8]] -select(condition, t, e) ==> [[1, 6], - [7, 4]] - - -# 'condition' tensor is [True, False] -# 't' is [[1, 2], -# [3, 4]] -# 'e' is [[5, 6], -# [7, 8]] -select(condition, t, e) ==> [[1, 2], - [7, 8]] - -``` - -##### Args: - - -* `condition`: A `Tensor` of type `bool`. -* `t`: A `Tensor` which may have the same shape as `condition`. - If `condition` is rank 1, `t` may have higher rank, - but its first dimension must match the size of `condition`. -* `e`: A `Tensor` with the same type and shape as `t`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with the same type and shape as `t` and `e`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sigmoid.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sigmoid.md new file mode 100644 index 0000000000..b056a48716 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sigmoid.md @@ -0,0 +1,18 @@ +### `tf.sigmoid(x, name=None)` {#sigmoid} + +Computes sigmoid of `x` element-wise. + +Specifically, `y = 1 / (1 + exp(-x))`. + +##### Args: + + +* `x`: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`, + or `qint32`. +* `name`: A name for the operation (optional). + +##### Returns: + + A Tensor with the same type as `x` if `x.dtype != qint32` + otherwise the return type is `quint8`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.space_to_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.space_to_batch.md new file mode 100644 index 0000000000..1999f21ea3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.space_to_batch.md @@ -0,0 +1,44 @@ +### `tf.space_to_batch(input, paddings, block_size, name=None)` {#space_to_batch} + +SpaceToBatch for 4-D tensors of type T. 
+
+Zero-pads and then rearranges (permutes) blocks of spatial data into batch.
+More specifically, this op outputs a copy of the input tensor where values from
+the `height` and `width` dimensions are moved to the `batch` dimension. After
+the zero-padding, both `height` and `width` of the input must be divisible by the
+block size.
+
+##### Args:
+
+
+* `input`: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.
+* `paddings`: A `Tensor` of type `int32`.
+    2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
+    the padding of the input with zeros across the spatial dimensions as follows:
+
+        paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
+
+    The effective spatial dimensions of the zero-padded input tensor will be:
+
+        height_pad = pad_top + height + pad_bottom
+        width_pad = pad_left + width + pad_right
+
+    The attr `block_size` must be greater than one. It indicates the block size.
+
+      * Non-overlapping blocks of size `block_size x block_size` in the height and
+        width dimensions are rearranged into the batch dimension at each location.
+      * The batch of the output tensor is `batch * block_size * block_size`.
+      * Both height_pad and width_pad must be divisible by block_size.
+
+    The shape of the output will be:
+
+        [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
+         depth]
+
+* `block_size`: An `int`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_retain.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_retain.md
new file mode 100644
index 0000000000..dcaa303627
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_retain.md
@@ -0,0 +1,33 @@
+### `tf.sparse_retain(sp_input, to_retain)` {#sparse_retain}
+
+Retains specified non-empty values within a `SparseTensor`.
+ +For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values: + + [0, 1]: a + [0, 3]: b + [2, 0]: c + [3, 1]: d + +and `to_retain = [True, False, False, True]`, then the output will +be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values: + + [0, 1]: a + [3, 1]: d + +##### Args: + + +* `sp_input`: The input `SparseTensor` with `N` non-empty elements. +* `to_retain`: A bool vector of length `N` with `M` true values. + +##### Returns: + + A `SparseTensor` with the same shape as the input and `M` non-empty + elements corresponding to the true positions in `to_retain`. + +##### Raises: + + +* `TypeError`: If `sp_input` is not a `SparseTensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_dense_matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_dense_matmul.md new file mode 100644 index 0000000000..5bb99ef029 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_dense_matmul.md @@ -0,0 +1,163 @@ +### `tf.sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)` {#sparse_tensor_dense_matmul} + +Multiply SparseTensor (of rank 2) "A" by dense matrix "B". + +No validity checking is performed on the indices of A. However, the following +input format is recommended for optimal behavior: + +if adjoint_a == false: + A should be sorted in lexicographically increasing order. Use + sparse_reorder if you're not sure. +if adjoint_a == true: + A should be sorted in order of increasing dimension 1 (i.e., "column major" + order instead of "row major" order). + +Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True): + +There are a number of questions to ask in the decision process, including: + +* Will the SparseTensor A fit in memory if densified? +* Is the column count of the product large (>> 1)? +* Is the density of A larger than approximately 15%? 
+ +If the answer to several of these questions is yes, consider +converting the SparseTensor to a dense one and using tf.matmul with sp_a=True. + +This operation tends to perform well when A is more sparse, if the column size +of the product is small (e.g. matrix-vector multiplication), if sp_a.shape +takes on large values. + +Below is a rough speed comparison between sparse_tensor_dense_matmul, +labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of +the comparison, the time spent converting from a SparseTensor to a dense +Tensor is not included, so it is overly conservative with respect to +the time ratio. + +Benchmark system: +CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB +GPU: NVidia Tesla k40c + +Compiled with: +-c opt --config=cuda --copt=-mavx + +```tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks +A sparse [m, k] with % nonzero values between 1% and 80% +B dense [k, n] + +% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense) +0.01 1 True 100 100 0.000221166 0.00010154 0.459112 +0.01 1 True 100 1000 0.00033858 0.000109275 0.322745 +0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385 +0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669 +0.01 1 False 100 100 0.000208085 0.000107603 0.51711 +0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762 +0.01 1 False 1000 100 0.000308222 0.00010345 0.335635 +0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124 +0.01 10 True 100 100 0.000218522 0.000105537 0.482958 +0.01 10 True 100 1000 0.000340882 0.000111641 0.327506 +0.01 10 True 1000 100 0.000315472 0.000117376 0.372064 +0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128 +0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354 +0.01 10 False 100 1000 0.000330552 0.000112615 0.340687 +0.01 10 False 1000 100 0.000341277 0.000114097 0.334324 +0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549 +0.01 25 True 100 100 0.000207806 0.000105977 0.509981 +0.01 25 True 100 1000 
0.000322879 0.00012921 0.400181 +0.01 25 True 1000 100 0.00038262 0.000141583 0.370035 +0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504 +0.01 25 False 100 100 0.000209401 0.000104696 0.499979 +0.01 25 False 100 1000 0.000321161 0.000130737 0.407076 +0.01 25 False 1000 100 0.000377012 0.000136801 0.362856 +0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413 +0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833 +0.2 1 True 100 1000 0.000348674 0.000147475 0.422959 +0.2 1 True 1000 100 0.000336908 0.00010122 0.300439 +0.2 1 True 1000 1000 0.001022 0.000203274 0.198898 +0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746 +0.2 1 False 100 1000 0.000356127 0.000146824 0.41228 +0.2 1 False 1000 100 0.000322664 0.000100918 0.312764 +0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648 +0.2 10 True 100 100 0.000211692 0.000109903 0.519165 +0.2 10 True 100 1000 0.000372819 0.000164321 0.440753 +0.2 10 True 1000 100 0.000338651 0.000144806 0.427596 +0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064 +0.2 10 False 100 100 0.000215727 0.000110502 0.512231 +0.2 10 False 100 1000 0.000375419 0.0001613 0.429653 +0.2 10 False 1000 100 0.000336999 0.000145628 0.432132 +0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618 +0.2 25 True 100 100 0.000218705 0.000129913 0.594009 +0.2 25 True 100 1000 0.000394794 0.00029428 0.745402 +0.2 25 True 1000 100 0.000404483 0.0002693 0.665788 +0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052 +0.2 25 False 100 100 0.000221494 0.0001306 0.589632 +0.2 25 False 100 1000 0.000396436 0.000297204 0.74969 +0.2 25 False 1000 100 0.000409346 0.000270068 0.659754 +0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046 +0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836 +0.5 1 True 100 1000 0.000415328 0.000223073 0.537101 +0.5 1 True 1000 100 0.000358324 0.00011269 0.314492 +0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851 +0.5 1 False 100 100 0.000224196 0.000101423 0.452386 +0.5 1 False 100 1000 0.000400987 0.000223286 0.556841 +0.5 1 
False 1000 100 0.000368825 0.00011224 0.304318 +0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563 +0.5 10 True 100 100 0.000222125 0.000112308 0.505608 +0.5 10 True 100 1000 0.000461088 0.00032357 0.701753 +0.5 10 True 1000 100 0.000394624 0.000225497 0.571422 +0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801 +0.5 10 False 100 100 0.000232083 0.000114978 0.495418 +0.5 10 False 100 1000 0.000454574 0.000324632 0.714146 +0.5 10 False 1000 100 0.000379097 0.000227768 0.600817 +0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638 +0.5 25 True 100 100 0.00023429 0.000151703 0.647501 +0.5 25 True 100 1000 0.000497462 0.000598873 1.20386 +0.5 25 True 1000 100 0.000460778 0.000557038 1.20891 +0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845 +0.5 25 False 100 100 0.000228981 0.000155334 0.678371 +0.5 25 False 100 1000 0.000496139 0.000620789 1.25124 +0.5 25 False 1000 100 0.00045473 0.000551528 1.21287 +0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927 +0.8 1 True 100 100 0.000222037 0.000105301 0.47425 +0.8 1 True 100 1000 0.000410804 0.000329327 0.801664 +0.8 1 True 1000 100 0.000349735 0.000131225 0.375212 +0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633 +0.8 1 False 100 100 0.000214079 0.000107486 0.502085 +0.8 1 False 100 1000 0.000413746 0.000323244 0.781261 +0.8 1 False 1000 100 0.000348983 0.000131983 0.378193 +0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282 +0.8 10 True 100 100 0.000229159 0.00011825 0.516017 +0.8 10 True 100 1000 0.000498845 0.000532618 1.0677 +0.8 10 True 1000 100 0.000383126 0.00029935 0.781336 +0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689 +0.8 10 False 100 100 0.000230783 0.000124958 0.541452 +0.8 10 False 100 1000 0.000493393 0.000550654 1.11606 +0.8 10 False 1000 100 0.000377167 0.000298581 0.791642 +0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024 +0.8 25 True 100 100 0.000233496 0.000175241 0.75051 +0.8 25 True 100 1000 0.00055654 0.00102658 1.84458 +0.8 25 True 1000 100 0.000463814 0.000783267 1.68875 +0.8 
25 True 1000 1000 0.00186905 0.00755344 4.04132 +0.8 25 False 100 100 0.000240243 0.000175047 0.728625 +0.8 25 False 100 1000 0.000578102 0.00104499 1.80763 +0.8 25 False 1000 100 0.000485113 0.000776849 1.60138 +0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992 +``` + +##### Args: + + +* `sp_a`: SparseTensor A, of rank 2. +* `b`: A dense Matrix with the same dtype as sp_a. +* `adjoint_a`: Use the adjoint of A in the matrix multiply. If A is complex, + this is transpose(conj(A)). Otherwise it's transpose(A). +* `adjoint_b`: Use the adjoint of B in the matrix multiply. If B is complex, + this is transpose(conj(B)). Otherwise it's transpose(B). +* `name`: A name prefix for the returned tensors (optional) + +##### Returns: + + A dense matrix (pseudo-code in dense np.matrix notation): + A = A.H if adjoint_a else A + B = B.H if adjoint_b else B + return A*B + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_to_dense.md deleted file mode 100644 index a0c0a6ca9c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_tensor_to_dense.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)` {#sparse_tensor_to_dense} - -Converts a `SparseTensor` into a dense tensor. - -This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s. - -For example, if `sp_input` has shape `[3, 5]` and non-empty string values: - - [0, 1]: a - [0, 3]: b - [2, 0]: c - -and `default_value` is `x`, then the output will be a dense `[3, 5]` -string tensor with values: - - [[x a x b x] - [x x x x x] - [c x x x x]] - -Indices must be without repeats. This is only -tested if validate_indices is True. - -##### Args: - - -* `sp_input`: The input `SparseTensor`. -* `default_value`: Scalar value to set for indices not specified in - `sp_input`. 
Defaults to zero. -* `validate_indices`: A boolean value. If `True`, indices are checked to make - sure they are sorted in lexicographic order and that there are no repeats. -* `name`: A name prefix for the returned tensors (optional). - -##### Returns: - - A dense tensor with shape `sp_input.shape` and values specified by - the non-empty values in `sp_input`. Indices not in `sp_input` are assigned - `default_value`. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_to_dense.md new file mode 100644 index 0000000000..d4df5a9183 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_to_dense.md @@ -0,0 +1,45 @@ +### `tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)` {#sparse_to_dense} + +Converts a sparse representation into a dense tensor. + +Builds an array `dense` with shape `output_shape` such that + +```python +# If sparse_indices is scalar +dense[i] = (i == sparse_indices ? sparse_values : default_value) + +# If sparse_indices is a vector, then for each i +dense[sparse_indices[i]] = sparse_values[i] + +# If sparse_indices is an n by d matrix, then for each i in [0, n) +dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i] +``` + +All other values in `dense` are set to `default_value`. If `sparse_values` +is a scalar, all sparse indices are set to this single value. + +Indices should be sorted in lexicographic order, and indices must not +contain any repeats. If `validate_indices` is True, these properties +are checked during execution. + +##### Args: + + +* `sparse_indices`: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`. + `sparse_indices[i]` contains the complete index where `sparse_values[i]` + will be placed. 
+* `output_shape`: A 1-D `Tensor` of the same type as `sparse_indices`. Shape + of the dense output tensor. +* `sparse_values`: A 0-D or 1-D `Tensor`. Values corresponding to each row of + `sparse_indices`, or a scalar value to be used for all sparse indices. +* `default_value`: A 0-D `Tensor` of the same type as `sparse_values`. Value + to set for indices not specified in `sparse_indices`. Defaults to zero. +* `validate_indices`: A boolean value. If True, indices are checked to make + sure they are sorted in lexicographic order and that there are no repeats. +* `name`: A name for the operation (optional). + +##### Returns: + + Dense `Tensor` of shape `output_shape`. Has the same type as + `sparse_values`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_hash_bucket_fast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_hash_bucket_fast.md deleted file mode 100644 index e684058326..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_hash_bucket_fast.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.string_to_hash_bucket_fast(input, num_buckets, name=None)` {#string_to_hash_bucket_fast} - -Converts each string in the input Tensor to its hash mod by a number of buckets. - -The hash function is deterministic on the content of the string within the -process and will never change. However, it is not suitable for cryptography. -This function may be used when CPU time is scarce and inputs are trusted or -unimportant. There is a risk of adversaries constructing inputs that all hash -to the same bucket. To prevent this problem, use a strong hash function with -`tf.string_to_hash_bucket_strong`. - -##### Args: - - -* `input`: A `Tensor` of type `string`. The strings to assign a hash bucket. -* `num_buckets`: An `int` that is `>= 1`. The number of buckets. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. 
-  A Tensor of the same shape as the input `string_tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_number.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_number.md
new file mode 100644
index 0000000000..dfbc0c6b6c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.string_to_number.md
@@ -0,0 +1,20 @@
+### `tf.string_to_number(string_tensor, out_type=None, name=None)` {#string_to_number}
+
+Converts each string in the input Tensor to the specified numeric type.
+
+(Note that int32 overflow results in an error while float overflow
+results in a rounded value.)
+
+##### Args:
+
+
+* `string_tensor`: A `Tensor` of type `string`.
+* `out_type`: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`.
+    The numeric type to interpret each string in string_tensor as.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `out_type`.
+  A Tensor of the same shape as the input `string_tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.compute_gradient_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.compute_gradient_error.md
deleted file mode 100644
index d2c91a66b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.compute_gradient_error.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.test.compute_gradient_error(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None)` {#compute_gradient_error}
-
-Computes the gradient error.
-
-Computes the maximum error for dy/dx between the computed Jacobian and the
-numerically estimated Jacobian.
-
-This function will modify the tensors passed in as it adds more operations
-and hence changes the consumers of the operations of the input tensors.
-
-This function adds operations to the current graph.
To compute the error -using a particular device, such as a GPU, use the standard methods for -setting a device (e.g. using with sess.graph.device() or setting a device -function in the session constructor). - -##### Args: - - -* `x`: a tensor or list of tensors -* `x_shape`: the dimensions of x as a tuple or an array of ints. If x is a list, - then this is the list of shapes. - -* `y`: a tensor -* `y_shape`: the dimensions of y as a tuple or an array of ints. -* `x_init_value`: (optional) a numpy array of the same shape as "x" - representing the initial value of x. If x is a list, this should be a list - of numpy arrays. If this is none, the function will pick a random tensor - as the initial value. -* `delta`: (optional) the amount of perturbation. -* `init_targets`: list of targets to run to initialize model params. - TODO(mrry): Remove this argument. - -##### Returns: - - The maximum error in between the two Jacobians. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md new file mode 100644 index 0000000000..8bebb8bd29 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md @@ -0,0 +1,187 @@ +Training helper that restores from checkpoint and creates session. + +This class is a small wrapper that takes care of session creation and +checkpoint recovery. It also provides functions that to facilitate +coordination among multiple training threads or processes. + +* Checkpointing trained variables as the training progresses. +* Initializing variables on startup, restoring them from the most recent + checkpoint after a crash, or wait for checkpoints to become available. + +### Usage: + +```python +with tf.Graph().as_default(): + ...add operations to the graph... + # Create a SessionManager that will checkpoint the model in '/tmp/mydir'. 
+  sm = SessionManager()
+  sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
+  # Use the session to train the graph.
+  while True:
+    sess.run()
+```
+
+`prepare_session()` initializes or restores a model. It requires `init_op`
+and `saver` as arguments.
+
+A second process could wait for the model to be ready by doing the following:
+
+```python
+with tf.Graph().as_default():
+   ...add operations to the graph...
+  # Create a SessionManager that will wait for the model to become ready.
+  sm = SessionManager()
+  sess = sm.wait_for_session(master)
+  # Use the session to train the graph.
+  while True:
+    sess.run()
+```
+
+`wait_for_session()` waits for a model to be initialized by other processes.
+- - -
+
+#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__}
+
+Creates a SessionManager.
+
+The `local_init_op` is an `Operation` that is always run after a new session
+is created. If `None`, this step is skipped.
+
+The `ready_op` is an `Operation` used to check if the model is ready. The
+model is considered ready if that operation returns an empty string tensor.
+If the operation returns a non-empty string tensor, the elements are
+concatenated and used to indicate to the user why the model is not ready.
+
+If `ready_op` is `None`, the model is not checked for readiness.
+
+`recovery_wait_secs` is the number of seconds between checks that
+the model is ready. It is used by processes to wait for a model to
+be initialized or restored. Defaults to 30 seconds.
+
+##### Args:
+
+
+* `local_init_op`: An `Operation` run immediately after session creation.
+    Usually used to initialize tables and local variables.
+* `ready_op`: An `Operation` to check if the model is initialized.
+* `graph`: The `Graph` that the model will use.
+* `recovery_wait_secs`: Seconds between checks for the model to be ready.
+
+
+- - -
+
+#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session}
+
+Creates a `Session`. Makes sure the model is ready to be used.
+
+Creates a `Session` on 'master'. If a `saver` object is passed in, and
+`checkpoint_dir` points to a directory containing valid checkpoint
+files, then it will try to recover the model from checkpoint. If
+no checkpoint files are available, and `wait_for_checkpoint` is
+`True`, then the process would check every `recovery_wait_secs`,
+up to `max_wait_secs`, for recovery to succeed.
+
+If the model cannot be recovered successfully then it is initialized by
+either running the provided `init_op`, or calling the provided `init_fn`.
+It is an error if the model cannot be recovered and neither an `init_op`
+nor an `init_fn` is passed.
+
+This is a convenience function for the following, with a few error checks
+added:
+
+```python
+sess, initialized = self.recover_session(master)
+if not initialized:
+  if init_op:
+    sess.run(init_op, feed_dict=init_feed_dict)
+  if init_fn:
+    init_fn(sess)
+return sess
+```
+
+##### Args:
+
+
+* `master`: `String` representation of the TensorFlow master to use.
+* `init_op`: Optional `Operation` used to initialize the model.
+* `saver`: A `Saver` object used to restore a model.
+* `checkpoint_dir`: Path to the checkpoint files.
+* `wait_for_checkpoint`: Whether to wait for checkpoint to become available.
+* `max_wait_secs`: Maximum time to wait for checkpoints to become available.
+* `config`: Optional `ConfigProto` proto used to configure the session.
+* `init_feed_dict`: Optional dictionary that maps `Tensor` objects to feed
+    values. This feed dictionary is passed to the session `run()` call when
+    running the init op.
+* `init_fn`: Optional callable used to initialize the model. Called after the
+    optional `init_op` is called. 
The callable must accept one argument, + the session being initialized. + +##### Returns: + + A `Session` object that can be used to drive the model. + +##### Raises: + + +* `RuntimeError`: If the model cannot be initialized or recovered. + + +- - - + +#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session} + +Creates a `Session`, recovering if possible. + +Creates a new session on 'master'. If the session is not initialized +and can be recovered from a checkpoint, recover it. + +##### Args: + + +* `master`: `String` representation of the TensorFlow master to use. +* `saver`: A `Saver` object used to restore a model. +* `checkpoint_dir`: Path to the checkpoint files. +* `wait_for_checkpoint`: Whether to wait for checkpoint to become available. +* `max_wait_secs`: Maximum time to wait for checkpoints to become available. +* `config`: Optional `ConfigProto` proto used to configure the session. + +##### Returns: + + A pair (sess, initialized) where 'initialized' is `True` if + the session could be recovered, `False` otherwise. + + +- - - + +#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session} + +Creates a new `Session` and waits for model to be ready. + +Creates a new `Session` on 'master'. Waits for the model to be +initialized or recovered from a checkpoint. It's expected that +another thread or process will make the model ready, and that this +is intended to be used by threads/processes that participate in a +distributed training configuration where a different thread/process +is responsible for initializing or recovering the model being trained. + +NB: The amount of time this method waits for the session is bounded +by max_wait_secs. By default, this function will wait indefinitely. + +##### Args: + + +* `master`: `String` representation of the TensorFlow master to use. 
+* `config`: Optional ConfigProto proto used to configure the session. +* `max_wait_secs`: Maximum time to wait for the session to become available. + +##### Returns: + + A `Session`. May be None if the operation exceeds the timeout + specified by config.operation_timeout_in_ms. + +##### Raises: + + tf.DeadlineExceededError: if the session is not available after + max_wait_secs. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.add_queue_runner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.add_queue_runner.md deleted file mode 100644 index f5b9549ad8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.add_queue_runner.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.train.add_queue_runner(qr, collection='queue_runners')` {#add_queue_runner} - -Adds a `QueueRunner` to a collection in the graph. - -When building a complex model that uses many queues it is often difficult to -gather all the queue runners that need to be run. This convenience function -allows you to add a queue runner to a well known collection in the graph. - -The companion method `start_queue_runners()` can be used to start threads for -all the collected queue runners. - -##### Args: - - -* `qr`: A `QueueRunner`. -* `collection`: A `GraphKey` specifying the graph collection to add - the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`. 
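The recover-then-initialize control flow documented for `prepare_session()` above can be sketched in plain Python. This is an illustrative sketch only: `recover` and `run_init` are hypothetical stand-ins for the real checkpoint and session machinery, not TensorFlow APIs.

```python
def prepare_session_sketch(recover, init_op=None, init_fn=None, run_init=None):
    # Try to recover from a checkpoint first; only fall back to the
    # explicit initializers when recovery did not happen.
    sess, initialized = recover()
    if not initialized:
        if init_op is None and init_fn is None:
            # Mirrors the documented RuntimeError case.
            raise RuntimeError(
                "Model cannot be recovered and no init_op or init_fn given.")
        if init_op is not None:
            run_init(sess, init_op)   # stands in for sess.run(init_op, ...)
        if init_fn is not None:
            init_fn(sess)             # init_fn receives the session

    return sess

# Demo with stub callables: recovery fails, so init_op is run.
ran = []
sess = prepare_session_sketch(lambda: ("sess", False),
                              init_op="init_op",
                              run_init=lambda s, op: ran.append(op))
```

Note that when recovery succeeds (`initialized` is `True`), neither `init_op` nor `init_fn` is consulted, which is why passing them is optional in that case.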
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.batch_join.md new file mode 100644 index 0000000000..f6985b0a44 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.batch_join.md @@ -0,0 +1,79 @@ +### `tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch_join} + +Runs a list of tensors to fill a queue to create batches of examples. + +The `tensors_list` argument is a list of tuples of tensors, or a list of +dictionaries of tensors. Each element in the list is treated similarly +to the `tensors` argument of `tf.train.batch()`. + +Enqueues a different list of tensors in different threads. +Implemented using a queue -- a `QueueRunner` for the queue +is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +`len(tensors_list)` threads will be started, +with thread `i` enqueuing the tensors from +`tensors_list[i]`. `tensors_list[i1][j]` must match +`tensors_list[i2][j]` in type and shape, except in the first +dimension if `enqueue_many` is true. + +If `enqueue_many` is `False`, each `tensors_list[i]` is assumed +to represent a single example. An input tensor `x` will be output as a +tensor with shape `[batch_size] + x.shape`. + +If `enqueue_many` is `True`, `tensors_list[i]` is assumed to +represent a batch of examples, where the first dimension is indexed +by example, and all members of `tensors_list[i]` should have the +same size in the first dimension. The slices of any input tensor +`x` are treated as examples, and the output tensors will have shape +`[batch_size] + x.shape[1:]`. + +The `capacity` argument controls how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +`tf.errors.OutOfRangeError` if the input queue is exhausted.
If this +operation is feeding another input queue, its queue runner will catch +this exception; however, if this operation is used in your main thread +you are responsible for catching it yourself. + +*N.B.:* If `dynamic_pad` is `False`, you must ensure that either +(i) the `shapes` argument is passed, or (ii) all of the tensors in +`tensors_list` have fully-defined shapes. `ValueError` will be +raised if neither of these conditions holds. + +If `dynamic_pad` is `True`, it is sufficient that the *rank* of the +tensors is known, but individual dimensions may have value `None`. +In this case, for each enqueue the dimensions with value `None` +may have a variable length; upon dequeue, the output tensors will be padded +on the right to the maximum shape of the tensors in the current minibatch. +For numbers, this padding takes value 0. For strings, this padding is +the empty string. See `PaddingFIFOQueue` for more info. + +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors_list`.
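The two output-shape rules described for `batch_join()` (prepend `batch_size` when each input is a single example; replace the leading example dimension when `enqueue_many=True`) can be sketched in plain Python. This helper is illustrative only and is not part of the TensorFlow API.

```python
def batched_shape(example_shape, batch_size, enqueue_many=False):
    # enqueue_many=False: the input is one example,
    #   so the output shape is [batch_size] + x.shape.
    # enqueue_many=True: dimension 0 already indexes examples,
    #   so the output shape is [batch_size] + x.shape[1:].
    if enqueue_many:
        return [batch_size] + list(example_shape)[1:]
    return [batch_size] + list(example_shape)

single = batched_shape([28, 28], 32)                        # -> [32, 28, 28]
many = batched_shape([10, 28, 28], 32, enqueue_many=True)   # -> [32, 28, 28]
```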
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.import_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.import_meta_graph.md new file mode 100644 index 0000000000..21b3465076 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.import_meta_graph.md @@ -0,0 +1,65 @@ +### `tf.train.import_meta_graph(meta_graph_or_file)` {#import_meta_graph} + +Recreates a Graph saved in a `MetaGraphDef` proto. + +This function takes a `MetaGraphDef` protocol buffer as input. If +the argument is a file containing a `MetaGraphDef` protocol buffer, +it constructs a protocol buffer from the file content. The function +then adds all the nodes from the `graph_def` field to the +current graph, recreates all the collections, and returns a saver +constructed from the `saver_def` field. + +In combination with `export_meta_graph()`, this function can be used to: + +* Serialize a graph along with other Python objects such as `QueueRunner` and + `Variable` into a `MetaGraphDef`. + +* Restart training from a saved graph and checkpoints. + +* Run inference from a saved graph and checkpoints. + +```Python +... +# Create a saver. +saver = tf.train.Saver(...variables...) +# Remember the training_op we want to run by adding it to a collection. +tf.add_to_collection('train_op', train_op) +sess = tf.Session() +for step in xrange(1000000): + sess.run(train_op) + if step % 1000 == 0: + # Saves checkpoint, which by default also exports a meta_graph + # named 'my-model-global_step.meta'. + saver.save(sess, 'my-model', global_step=step) +``` + +Later we can continue training from this saved `meta_graph` without building +the model from scratch. + +```Python +with tf.Session() as sess: + new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta') + new_saver.restore(sess, 'my-save-dir/my-model-10000') + # tf.get_collection() returns a list. In this example we only want the + # first one.
+ train_op = tf.get_collection('train_op')[0] + for step in xrange(1000000): + sess.run(train_op) +``` + +NOTE: Restarting training from a saved `meta_graph` only works if the +device assignments have not changed. + +##### Args: + + +* `meta_graph_or_file`: `MetaGraphDef` protocol buffer or filename (including + the path) containing a `MetaGraphDef`. + +##### Returns: + + A saver constructed from `saver_def` in `MetaGraphDef` or None. + + A None value is returned if no variables exist in the `MetaGraphDef` + (i.e., there are no variables to restore). + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.shuffle_batch_join.md new file mode 100644 index 0000000000..ab9e1be4a3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.shuffle_batch_join.md @@ -0,0 +1,68 @@ +### `tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#shuffle_batch_join} + +Creates batches by randomly shuffling tensors. + +The `tensors_list` argument is a list of tuples of tensors, or a list of +dictionaries of tensors. Each element in the list is treated similarly +to the `tensors` argument of `tf.train.shuffle_batch()`. + +This version enqueues a different list of tensors in different threads. +It adds the following to the current `Graph`: + +* A shuffling queue into which tensors from `tensors_list` are enqueued. +* A `dequeue_many` operation to create batches from the queue. +* A `QueueRunner` added to the `QUEUE_RUNNER` collection, to enqueue the tensors + from `tensors_list`. + +`len(tensors_list)` threads will be started, with thread `i` enqueuing +the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match +`tensors_list[i2][j]` in type and shape, except in the first dimension if +`enqueue_many` is true.
+ +If `enqueue_many` is `False`, each `tensors_list[i]` is assumed +to represent a single example. An input tensor with shape `[x, y, z]` +will be output as a tensor with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors_list[i]` is assumed to +represent a batch of examples, where the first dimension is indexed +by example, and all members of `tensors_list[i]` should have the +same size in the first dimension. If an input tensor has shape `[*, x, +y, z]`, the output will have shape `[batch_size, x, y, z]`. + +The `capacity` argument controls how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +`tf.errors.OutOfRangeError` if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception; however, if this operation is used in your main thread +you are responsible for catching it yourself. + +##### Args: + + +* `tensors_list`: A list of tuples or dictionaries of tensors to enqueue. +* `batch_size`: An integer. The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors_list` is a single + example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors_list[i]`. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same number and types as + `tensors_list[i]`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors_list`.
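The mixing behavior that `min_after_dequeue` controls can be illustrated with a small pure-Python shuffle buffer. This is a hypothetical analogue of the shuffling queue, not the real implementation: it keeps at least `min_after_dequeue` elements buffered and emits a randomly chosen buffered element each time.

```python
import random

def shuffled_stream(examples, min_after_dequeue, seed=None):
    # Buffer incoming examples; once the buffer exceeds
    # min_after_dequeue, emit a randomly chosen buffered element.
    # A larger min_after_dequeue gives better mixing, as the docs note.
    rng = random.Random(seed)
    buf = []
    for ex in examples:
        buf.append(ex)
        if len(buf) > min_after_dequeue:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the buffer once the input is exhausted
        yield buf.pop(rng.randrange(len(buf)))

out = list(shuffled_stream(range(100), min_after_dequeue=10, seed=0))
```

Every input element comes out exactly once; only the order changes, and elements can only be reordered within the reach of the buffer.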
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truediv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truediv.md deleted file mode 100644 index 0ccb1b2217..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truediv.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.truediv(x, y, name=None)` {#truediv} - -Divides x / y elementwise, always producing floating point results. - -The same as `tf.div` for floating point arguments, but casts integer arguments -to floating point before dividing so that the result is always floating point. -This op is generated by normal `x / y` division in Python 3 and in Python 2.7 -with `from __future__ import division`. If you want integer division that -rounds down, use `x // y` or `tf.floordiv`. - -`x` and `y` must have the same numeric type. If the inputs are floating -point, the output will have the same type. If the inputs are integral, the -inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32` -and `int64` (matching the behavior of Numpy). - -##### Args: - - -* `x`: `Tensor` numerator of numeric type. -* `y`: `Tensor` denominator of numeric type. -* `name`: A name for the operation (optional). - -##### Returns: - - `x / y` evaluated in floating point. - -##### Raises: - - -* `TypeError`: If `x` and `y` have different dtypes. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md new file mode 100644 index 0000000000..0a335333e2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md @@ -0,0 +1,31 @@ +### `tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#truncated_normal_initializer} + +Returns an initializer that generates a truncated normal distribution. 
+ +These values are similar to values from a `random_normal_initializer` +except that values more than two standard deviations from the mean +are discarded and re-drawn. This is the recommended initializer for +neural network weights and filters. + +##### Args: + + +* `mean`: a Python scalar or a scalar tensor. Mean of the random values + to generate. +* `stddev`: a Python scalar or a scalar tensor. Standard deviation of the + random values to generate. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer that generates tensors with a truncated normal + distribution. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unique.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unique.md new file mode 100644 index 0000000000..0929f57b0f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unique.md @@ -0,0 +1,33 @@ +### `tf.unique(x, name=None)` {#unique} + +Finds unique elements in a 1-D tensor. + +This operation returns a tensor `y` containing all of the unique elements of `x` +sorted in the same order that they occur in `x`. This operation also returns a +tensor `idx` the same size as `x` that contains the index of each value of `x` +in the unique output `y`. In other words: + +`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]` + +For example: + +```prettyprint +# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] +y, idx = unique(x) +y ==> [1, 2, 4, 7, 8] +idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] +``` + +##### Args: + + +* `x`: A `Tensor`. 1-D. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of `Tensor` objects (y, idx). + +* `y`: A `Tensor`. Has the same type as `x`. 1-D.
+* `idx`: A `Tensor` of type `int32`. 1-D. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unpack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unpack.md deleted file mode 100644 index cc4884c720..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.unpack.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.unpack(value, num=None, name='unpack')` {#unpack} - -Unpacks the outer dimension of a rank-`R` tensor into rank-`(R-1)` tensors. - -Unpacks `num` tensors from `value` along the first dimension. -If `num` is not specified (the default), it is inferred from `value`'s shape. -If `value.shape[0]` is not known, `ValueError` is raised. - -The ith tensor in `output` is the slice `value[i, ...]`. Each tensor in -`output` has shape `value.shape[1:]`. - -This is the opposite of pack. The numpy equivalent is - - tf.unpack(x, n) = list(x) - -##### Args: - - -* `value`: A rank `R > 0` `Tensor` to be unpacked. -* `num`: An `int`. The first dimension of value. Automatically inferred if - `None` (the default). -* `name`: A name for the operation (optional). - -##### Returns: - - The list of `Tensor` objects unpacked from `value`. - -##### Raises: - - -* `ValueError`: If `num` is unspecified and cannot be inferred. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md deleted file mode 100644 index e3ab6e5d2e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md +++ /dev/null @@ -1,56 +0,0 @@ -### `tf.variable_op_scope(values, name_or_scope, default_name=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, reuse=None)` {#variable_op_scope} - -Returns a context manager for defining an op that creates variables. 
- -This context manager validates that the given `values` are from the -same graph, ensures that graph is the default graph, and pushes a -name scope and a variable scope. - -If `name_or_scope` is not None, it is used as is in the variable scope. If -`name_or_scope` is None, then `default_name` is used. In that case, if the same name -has been previously used in the same scope, it will be made unique by appending -`_N` to it. - -This is intended to be used when defining generic ops and so reuse is always -inherited. - -For example, to define a new Python op called `my_op_with_vars`: - -```python -def my_op_with_vars(a, b, scope=None): - with tf.variable_op_scope([a, b], scope, "MyOp") as scope: - a = tf.convert_to_tensor(a, name="a") - b = tf.convert_to_tensor(b, name="b") - c = tf.get_variable('c') - # Define some computation that uses `a`, `b`, and `c`. - return foo_op(..., name=scope) -``` - -##### Args: - - -* `values`: The list of `Tensor` arguments that are passed to the op function; - this name_or_scope is not uniquified in the variable scope. -* `default_name`: The default name to use if the `name_or_scope` argument is - `None`; this name will be uniquified. If name_or_scope is provided it - won't be used and therefore it is not required and can be None. -* `initializer`: The default initializer to pass to variable scope. -* `regularizer`: The default regularizer for variables within this scope. -* `caching_device`: The default caching device for variables within this scope. -* `partitioner`: The default partitioner for variables within this scope. -* `reuse`: `True` or `None`; if `True`, we go into reuse mode for this scope as - well as all sub-scopes; if `None`, we just inherit the parent scope reuse. - - -##### Returns: - - A context manager for use in defining a Python op.
- -##### Raises: - - -* `ValueError`: when trying to reuse within a create scope, or create within - a reuse scope, or if reuse is not `None` or `True`. -* `TypeError`: when the types of some arguments are not appropriate. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_scope.md new file mode 100644 index 0000000000..86d4684b72 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_scope.md @@ -0,0 +1,82 @@ +### `tf.variable_scope(name_or_scope, reuse=None, initializer=None, regularizer=None, caching_device=None, partitioner=None)` {#variable_scope} + +Returns a context for variable scope. + +Variable scope allows you to create new variables and to share already created +ones while providing checks to avoid creating or sharing variables by accident. +For details, see the [Variable Scope How To](../../how_tos/variable_scope/index.md); +here we present only a few basic examples. + +Simple example of how to create a new variable: + +```python +with tf.variable_scope("foo"): + with tf.variable_scope("bar"): + v = tf.get_variable("v", [1]) + assert v.name == "foo/bar/v:0" +``` + +Basic example of sharing a variable: + +```python +with tf.variable_scope("foo"): + v = tf.get_variable("v", [1]) +with tf.variable_scope("foo", reuse=True): + v1 = tf.get_variable("v", [1]) +assert v1 == v +``` + +Sharing a variable by capturing a scope and setting reuse: + +```python +with tf.variable_scope("foo") as scope: + v = tf.get_variable("v", [1]) + scope.reuse_variables() + v1 = tf.get_variable("v", [1]) +assert v1 == v +``` + +To prevent accidental sharing of variables, we raise an exception when +getting an existing variable in a non-reusing scope. + +```python +with tf.variable_scope("foo"): + v = tf.get_variable("v", [1]) + v1 = tf.get_variable("v", [1]) + # Raises ValueError("... v already exists ...").
+``` + +Similarly, we raise an exception when trying to get a variable that +does not exist in reuse mode. + +```python +with tf.variable_scope("foo", reuse=True): + v = tf.get_variable("v", [1]) + # Raises ValueError("... v does not exist ..."). +``` + +Note that the `reuse` flag is inherited: if we open a reusing scope, +then all its sub-scopes become reusing as well. + +##### Args: + + +* `name_or_scope`: `string` or `VariableScope`: the scope to open. +* `reuse`: `True` or `None`; if `True`, we go into reuse mode for this scope as + well as all sub-scopes; if `None`, we just inherit the parent scope reuse. +* `initializer`: default initializer for variables within this scope. +* `regularizer`: default regularizer for variables within this scope. +* `caching_device`: default caching device for variables within this scope. +* `partitioner`: default partitioner for variables within this scope. + +##### Returns: + + A scope that can be captured and reused. + +##### Raises: + + +* `ValueError`: when trying to reuse within a create scope, or create within + a reuse scope, or if reuse is not `None` or `True`. +* `TypeError`: when the types of some arguments are not appropriate. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeros_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeros_like.md new file mode 100644 index 0000000000..9017e14287 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeros_like.md @@ -0,0 +1,28 @@ +### `tf.zeros_like(tensor, dtype=None, name=None)` {#zeros_like} + +Creates a tensor with all elements set to zero. + +Given a single tensor (`tensor`), this operation returns a tensor of the +same type and shape as `tensor` with all elements set to zero. Optionally, +you can use `dtype` to specify a new type for the returned tensor.
+ +For example: + +```python +# 'tensor' is [[1, 2, 3], [4, 5, 6]] +tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]] +``` + +##### Args: + + +* `tensor`: A `Tensor`. +* `dtype`: A type for the returned `Tensor`. Must be `float32`, `float64`, + `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`. + +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with all elements set to zero. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeta.md deleted file mode 100644 index ed66237d38..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.zeta.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.zeta(x, q, name=None)` {#zeta} - -Compute the Hurwitz zeta function \\(\zeta(x, q)\\). - -The Hurwitz zeta function is defined as: - -``` -\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x} -``` - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `q`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Operation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Operation.md new file mode 100644 index 0000000000..a9e21fb29e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Operation.md @@ -0,0 +1,225 @@ +Represents a graph node that performs computation on tensors. + +An `Operation` is a node in a TensorFlow `Graph` that takes zero or +more `Tensor` objects as input, and produces zero or more `Tensor` +objects as output. Objects of type `Operation` are created by +calling a Python op constructor (such as +[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul)) +or [`Graph.create_op()`](../../api_docs/python/framework.md#Graph.create_op). 
+ +For example `c = tf.matmul(a, b)` creates an `Operation` of type +"MatMul" that takes tensors `a` and `b` as input, and produces `c` +as output. + +After the graph has been launched in a session, an `Operation` can +be executed by passing it to +[`Session.run()`](../../api_docs/python/client.md#Session.run). +`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`. + +- - - + +#### `tf.Operation.name` {#Operation.name} + +The full name of this operation. + + +- - - + +#### `tf.Operation.type` {#Operation.type} + +The type of the op (e.g. `"MatMul"`). + + +- - - + +#### `tf.Operation.inputs` {#Operation.inputs} + +The list of `Tensor` objects representing the data inputs of this op. + + +- - - + +#### `tf.Operation.control_inputs` {#Operation.control_inputs} + +The `Operation` objects on which this op has a control dependency. + +Before this op is executed, TensorFlow will ensure that the +operations in `self.control_inputs` have finished executing. This +mechanism can be used to run ops sequentially for performance +reasons, or to ensure that the side effects of an op are observed +in the correct order. + +##### Returns: + + A list of `Operation` objects. + + +- - - + +#### `tf.Operation.outputs` {#Operation.outputs} + +The list of `Tensor` objects representing the outputs of this op. + + +- - - + +#### `tf.Operation.device` {#Operation.device} + +The name of the device to which this op has been assigned, if any. + +##### Returns: + + The string name of the device to which this op has been + assigned, or an empty string if it has not been assigned to a + device. + + +- - - + +#### `tf.Operation.graph` {#Operation.graph} + +The `Graph` that contains this operation. + + + +- - - + +#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run} + +Runs this operation in a `Session`. + +Calling this method will execute all preceding operations that +produce the inputs needed for this operation. 
+ +*N.B.* Before invoking `Operation.run()`, its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + +##### Args: + + +* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. + See [`Session.run()`](../../api_docs/python/client.md#Session.run) + for a description of the valid feed values. +* `session`: (Optional.) The `Session` to be used to run this operation. If + `None`, the default session will be used. + + + +- - - + +#### `tf.Operation.get_attr(name)` {#Operation.get_attr} + +Returns the value of the attr of this op with the given `name`. + +##### Args: + + +* `name`: The name of the attr to fetch. + +##### Returns: + + The value of the attr, as a Python object. + +##### Raises: + + +* `ValueError`: If this op does not have an attr with the given `name`. + + +- - - + +#### `tf.Operation.traceback` {#Operation.traceback} + +Returns the call stack from when this operation was constructed. + + + +#### Other Methods +- - - + +#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__} + +Creates an `Operation`. + +NOTE: This constructor validates the name of the `Operation` (passed +as `node_def.name`). Valid `Operation` names match the following +regular expression: + + [A-Za-z0-9.][A-Za-z0-9_.\-/]* + +##### Args: + + +* `node_def`: `graph_pb2.NodeDef`. `NodeDef` for the `Operation`. + Used for attributes of `graph_pb2.NodeDef`, typically `name`, + `op`, and `device`. The `input` attribute is irrelevant here + as it will be computed when generating the model. +* `g`: `Graph`. The parent graph. +* `inputs`: list of `Tensor` objects. The inputs to this `Operation`. +* `output_types`: list of `DType` objects. List of the types of the + `Tensors` computed by this operation. The length of this list indicates + the number of output endpoints of the `Operation`.
+* `control_inputs`: list of operations or tensors from which to have a + control dependency. +* `input_types`: List of `DType` objects representing the + types of the tensors accepted by the `Operation`. By default + uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect + reference-typed inputs must specify these explicitly. +* `original_op`: Optional. Used to associate the new `Operation` with an + existing `Operation` (for example, a replica with the op that was + replicated). +* `op_def`: Optional. The `op_def_pb2.OpDef` proto that describes the + op type that this `Operation` represents. + +##### Raises: + + +* `TypeError`: if control inputs are not Operations or Tensors, + or if `node_def` is not a `NodeDef`, + or if `g` is not a `Graph`, + or if `inputs` are not tensors, + or if `inputs` and `input_types` are incompatible. +* `ValueError`: if the `node_def` name is not valid. + + +- - - + +#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups} + +Returns the list of colocation groups of the op. + + +- - - + +#### `tf.Operation.node_def` {#Operation.node_def} + +Returns a serialized `NodeDef` representation of this operation. + +##### Returns: + + A + [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) + protocol buffer. + + +- - - + +#### `tf.Operation.op_def` {#Operation.op_def} + +Returns the `OpDef` proto that represents the type of this op. + +##### Returns: + + An + [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto) + protocol buffer. + + +- - - + +#### `tf.Operation.values()` {#Operation.values} + +DEPRECATED: Use outputs. 
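The ordering guarantee described above for `inputs` and `control_inputs` amounts to executing ops in a topological order of the dependency graph. A toy sketch in plain Python, where `OpSketch` and `run_order` are hypothetical illustrations, not TensorFlow APIs:

```python
class OpSketch(object):
    """Minimal stand-in for a graph node with data and control deps."""
    def __init__(self, name, inputs=(), control_inputs=()):
        self.name = name
        self.inputs = list(inputs)
        self.control_inputs = list(control_inputs)

def run_order(op, seen=None, order=None):
    # Depth-first walk: every op is listed only after all of its data
    # inputs and control inputs, mirroring TensorFlow's guarantee that
    # control_inputs finish executing before the op runs.
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if op.name not in seen:
        seen.add(op.name)
        for dep in op.inputs + op.control_inputs:
            run_order(dep, seen, order)
        order.append(op.name)
    return order

a = OpSketch("a")
b = OpSketch("b")
init = OpSketch("init")
c = OpSketch("matmul", inputs=[a, b], control_inputs=[init])
order = run_order(c)  # -> ['a', 'b', 'init', 'matmul']
```

Data inputs and control inputs are treated identically for ordering purposes; the difference is only that control inputs carry no tensor values into the op.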
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md deleted file mode 100644 index 35373f6edc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md +++ /dev/null @@ -1,268 +0,0 @@ -Base class for queue implementations. - -A queue is a TensorFlow data structure that stores tensors across -multiple steps, and exposes operations that enqueue and dequeue -tensors. - -Each queue element is a tuple of one or more tensors, where each -tuple component has a static dtype, and may have a static shape. The -queue implementations support versions of enqueue and dequeue that -handle single elements, and versions that support enqueuing and -dequeuing a batch of elements at once. - -See [`tf.FIFOQueue`](#FIFOQueue) and -[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete -implementations of this class, and instructions on how to create -them. - -- - - - -#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue} - -Enqueues one element to this queue. - -If the queue is full when this operation executes, it will block -until the element has been enqueued. - -##### Args: - - -* `vals`: A tensor, a list or tuple of tensors, or a dictionary containing - the values to enqueue. -* `name`: A name for the operation (optional). - -##### Returns: - - The operation that enqueues a new tuple of tensors to the queue. - - -- - - - -#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many} - -Enqueues zero or more elements to this queue. - -This operation slices each component tensor along the 0th dimension to -make multiple queue elements. All of the tensors in `vals` must have the -same size in the 0th dimension. - -If the queue is full when this operation executes, it will block -until all of the elements have been enqueued.
- -##### Args: - - -* `vals`: A tensor, a list or tuple of tensors, or a dictionary - from which the queue elements are taken. -* `name`: A name for the operation (optional). - -##### Returns: - - The operation that enqueues a batch of tuples of tensors to the queue. - - - -- - - - -#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue} - -Dequeues one element from this queue. - -If the queue is empty when this operation executes, it will block -until there is an element to dequeue. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The tuple of tensors that was dequeued. - - -- - - - -#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many} - -Dequeues and concatenates `n` elements from this queue. - -This operation concatenates queue-element component tensors along -the 0th dimension to make a single component tensor. All of the -components in the dequeued tuple will have size `n` in the 0th dimension. - -If the queue is closed and there are less than `n` elements left, then an -`OutOfRange` exception is raised. - -##### Args: - - -* `n`: A scalar `Tensor` containing the number of elements to dequeue. -* `name`: A name for the operation (optional). - -##### Returns: - - The tuple of concatenated tensors that was dequeued. - - - -- - - - -#### `tf.QueueBase.size(name=None)` {#QueueBase.size} - -Compute the number of elements in this queue. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar tensor containing the number of elements in this queue. - - - -- - - - -#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close} - -Closes this queue. - -This operation signals that no more elements will be enqueued in -the given queue. Subsequent `enqueue` and `enqueue_many` -operations will fail. Subsequent `dequeue` and `dequeue_many` -operations will continue to succeed if sufficient elements remain -in the queue. 
Subsequent `dequeue` and `dequeue_many` operations -that would block will fail immediately. - -If `cancel_pending_enqueues` is `True`, all pending requests will also -be cancelled. - -##### Args: - - -* `cancel_pending_enqueues`: (Optional.) A boolean, defaulting to - `False` (described above). -* `name`: A name for the operation (optional). - -##### Returns: - - The operation that closes the queue. - - - -#### Other Methods -- - - - -#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__} - -Constructs a queue object from a queue reference. - -The two optional lists, `shapes` and `names`, must be of the same length -as `dtypes` if provided. The values at a given index `i` indicate the -shape and name to use for the corresponding queue component in `dtypes`. - -##### Args: - - -* `dtypes`: A list of types. The length of dtypes must equal the number - of tensors in each element. -* `shapes`: Constraints on the shapes of tensors in an element: - A list of shape tuples or None. This list is the same length - as dtypes. If the shape of any tensors in the element are constrained, - all must be; shapes can be None if the shapes should not be constrained. -* `names`: Optional list of names. If provided, the `enqueue()` and - `dequeue()` methods will use dictionaries with these names as keys. - Must be None or a list or tuple of the same length as `dtypes`. -* `queue_ref`: The queue reference, i.e. the output of the queue op. - -##### Raises: - - -* `ValueError`: If one of the arguments is invalid. - - -- - - - -#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to} - -Dequeues and concatenates `n` elements from this queue. - -**Note** This operation is not supported by all queues. If a queue does not -support DequeueUpTo, then an Unimplemented exception is raised. - -This operation concatenates queue-element component tensors along the -0th dimension to make a single component tensor. 
All of the components -in the dequeued tuple will have size `n` in the 0th dimension. - -If the queue is closed and there are more than `0` but less than `n` -elements remaining, then instead of raising an `OutOfRange` exception like -`dequeue_many`, the remaining elements are returned immediately. -If the queue is closed and there are `0` elements left in the queue, then -an `OutOfRange` exception is raised just like in `dequeue_many`. -Otherwise the behavior is identical to `dequeue_many`: - -##### Args: - - -* `n`: A scalar `Tensor` containing the number of elements to dequeue. -* `name`: A name for the operation (optional). - -##### Returns: - - The tuple of concatenated tensors that was dequeued. - - -- - - - -#### `tf.QueueBase.dtypes` {#QueueBase.dtypes} - -The list of dtypes for each component of a queue element. - - -- - - - -#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list} - -Create a queue using the queue reference from `queues[index]`. - -##### Args: - - -* `index`: An integer scalar tensor that determines the input that gets - selected. -* `queues`: A list of `QueueBase` objects. - -##### Returns: - - A `QueueBase` object. - -##### Raises: - - -* `TypeError`: When `queues` is not a list of `QueueBase` objects, - or when the data types of `queues` are not all the same. - - -- - - - -#### `tf.QueueBase.name` {#QueueBase.name} - -The name of the underlying queue. - - -- - - - -#### `tf.QueueBase.names` {#QueueBase.names} - -The list of names for each component of a queue element. - - -- - - - -#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref} - -The underlying queue reference. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ReaderBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ReaderBase.md new file mode 100644 index 0000000000..bc9f62de5a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ReaderBase.md @@ -0,0 +1,156 @@ +Base class for different Reader types, that produce a record every step. + +Conceptually, Readers convert string 'work units' into records (key, +value pairs). Typically the 'work units' are filenames and the +records are extracted from the contents of those files. We want a +single record produced per step, but a work unit can correspond to +many records. + +Therefore we introduce some decoupling using a queue. The queue +contains the work units and the Reader dequeues from the queue when +it is asked to produce a record (via Read()) but it has finished the +last work unit. +- - - + +#### `tf.ReaderBase.__init__(reader_ref, supports_serialize=False)` {#ReaderBase.__init__} + +Creates a new ReaderBase. + +##### Args: + + +* `reader_ref`: The operation that implements the reader. +* `supports_serialize`: True if the reader implementation can + serialize its state. + + +- - - + +#### `tf.ReaderBase.num_records_produced(name=None)` {#ReaderBase.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.ReaderBase.num_work_units_completed(name=None)` {#ReaderBase.num_work_units_completed} + +Returns the number of work units this reader has finished processing. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.ReaderBase.read(queue, name=None)` {#ReaderBase.read} + +Returns the next record (key, value pair) produced by a reader. 
+ +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.ReaderBase.reader_ref` {#ReaderBase.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.ReaderBase.reset(name=None)` {#ReaderBase.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.ReaderBase.restore_state(state, name=None)` {#ReaderBase.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.ReaderBase.serialize_state(name=None)` {#ReaderBase.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. + + +- - - + +#### `tf.ReaderBase.supports_serialize` {#ReaderBase.supports_serialize} + +Whether the Reader implementation can serialize its state. 
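The decoupling described at the top of this class — work units in a queue, exactly one record per `read()` call — can be sketched in plain Python. This is a sketch of the semantics only, not TensorFlow code; the `MiniReader` class and its record layout are illustrative assumptions:

```python
from collections import deque

class MiniReader:
    """Sketch of ReaderBase semantics: read() returns exactly one record,
    dequeuing a new work unit only after the previous one is exhausted."""

    def __init__(self, records_per_unit):
        self._records_per_unit = records_per_unit  # work unit -> list of values
        self._pending = []                         # records left in current unit
        self.num_records_produced = 0

    def read(self, queue):
        while not self._pending:                   # current work unit finished?
            unit = queue.popleft()                 # dequeue the next work unit
            self._pending = [(f"{unit}:{i}", v)
                             for i, v in enumerate(self._records_per_unit(unit))]
        self.num_records_produced += 1
        return self._pending.pop(0)                # one (key, value) per call

# Two "files", each holding two records: four read() calls, two dequeues.
queue = deque(["file1", "file2"])
reader = MiniReader(lambda name: [name + "-rec0", name + "-rec1"])
out = [reader.read(queue) for _ in range(4)]
```

The point of the sketch is the `while not self._pending` loop: the queue is touched only when the reader has finished its last work unit, mirroring the behavior described for `read()` above.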
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Tensor.md deleted file mode 100644 index 73af134a7a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Tensor.md +++ /dev/null @@ -1,228 +0,0 @@ -Represents a value produced by an `Operation`. - -A `Tensor` is a symbolic handle to one of the outputs of an -`Operation`. It does not hold the values of that operation's output, -but instead provides a means of computing those values in a -TensorFlow [`Session`](../../api_docs/python/client.md#Session). - -This class has two primary purposes: - -1. A `Tensor` can be passed as an input to another `Operation`. - This builds a dataflow connection between operations, which - enables TensorFlow to execute an entire `Graph` that represents a - large, multi-step computation. - -2. After the graph has been launched in a session, the value of the - `Tensor` can be computed by passing it to - [`Session.run()`](../../api_docs/python/client.md#Session.run). - `t.eval()` is a shortcut for calling - `tf.get_default_session().run(t)`. - -In the following example, `c`, `d`, and `e` are symbolic `Tensor` -objects, whereas `result` is a numpy array that stores a concrete -value: - -```python -# Build a dataflow graph. -c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) -d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) -e = tf.matmul(c, d) - -# Construct a `Session` to execute the graph. -sess = tf.Session() - -# Execute the graph and store the value that `e` represents in `result`. -result = sess.run(e) -``` - -- - - - -#### `tf.Tensor.dtype` {#Tensor.dtype} - -The `DType` of elements in this tensor. - - -- - - - -#### `tf.Tensor.name` {#Tensor.name} - -The string name of this tensor. - - -- - - - -#### `tf.Tensor.value_index` {#Tensor.value_index} - -The index of this tensor in the outputs of its `Operation`. 
- - -- - - - -#### `tf.Tensor.graph` {#Tensor.graph} - -The `Graph` that contains this tensor. - - -- - - - -#### `tf.Tensor.op` {#Tensor.op} - -The `Operation` that produces this tensor as an output. - - -- - - - -#### `tf.Tensor.consumers()` {#Tensor.consumers} - -Returns a list of `Operation`s that consume this tensor. - -##### Returns: - - A list of `Operation`s. - - - -- - - - -#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval} - -Evaluates this tensor in a `Session`. - -Calling this method will execute all preceding operations that -produce the inputs needed for the operation that produces this -tensor. - -*N.B.* Before invoking `Tensor.eval()`, its graph must have been -launched in a session, and either a default session must be -available, or `session` must be specified explicitly. - -##### Args: - - -* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. - See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a - description of the valid feed values. -* `session`: (Optional.) The `Session` to be used to evaluate this tensor. If - none, the default session will be used. - -##### Returns: - - A numpy array corresponding to the value of this tensor. - - - -- - - - -#### `tf.Tensor.get_shape()` {#Tensor.get_shape} - -Returns the `TensorShape` that represents the shape of this tensor. - -The shape is computed using shape inference functions that are -registered for each `Operation` type using `tf.RegisterShape`. -See [`TensorShape`](../../api_docs/python/framework.md#TensorShape) for more -details of what a shape represents. - -The inferred shape of a tensor is used to provide shape -information without having to launch the graph in a session. This -can be used for debugging, and providing early error messages. 
For -example: - -```python -c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) - -print(c.get_shape()) -==> TensorShape([Dimension(2), Dimension(3)]) - -d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]) - -print(d.get_shape()) -==> TensorShape([Dimension(4), Dimension(2)]) - -# Raises a ValueError, because `c` and `d` do not have compatible -# inner dimensions. -e = tf.matmul(c, d) - -f = tf.matmul(c, d, transpose_a=True, transpose_b=True) - -print(f.get_shape()) -==> TensorShape([Dimension(3), Dimension(4)]) -``` - -In some cases, the inferred shape may have unknown dimensions. If -the caller has additional information about the values of these -dimensions, `Tensor.set_shape()` can be used to augment the -inferred shape. - -##### Returns: - - A `TensorShape` representing the shape of this tensor. - - -- - - - -#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape} - -Updates the shape of this tensor. - -This method can be called multiple times, and will merge the given -`shape` with the current shape of this tensor. It can be used to -provide additional information about the shape of this tensor that -cannot be inferred from the graph alone. For example, this can be used -to provide additional information about the shapes of images: - -```python -_, image_data = tf.TFRecordReader(...).read(...) -image = tf.image.decode_png(image_data, channels=3) - -# The height and width dimensions of `image` are data dependent, and -# cannot be computed without executing the op. -print(image.get_shape()) -==> TensorShape([Dimension(None), Dimension(None), Dimension(3)]) - -# We know that each image in this dataset is 28 x 28 pixels. -image.set_shape([28, 28, 3]) -print(image.get_shape()) -==> TensorShape([Dimension(28), Dimension(28), Dimension(3)]) -``` - -##### Args: - - -* `shape`: A `TensorShape` representing the shape of this tensor. - -##### Raises: - - -* `ValueError`: If `shape` is not compatible with the current shape of - this tensor. 
- - - -#### Other Methods -- - - - -#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__} - -Creates a new `Tensor`. - -##### Args: - - -* `op`: An `Operation`. `Operation` that computes this tensor. -* `value_index`: An `int`. Index of the operation's endpoint that produces - this tensor. -* `dtype`: A `DType`. Type of elements stored in this tensor. - -##### Raises: - - -* `TypeError`: If the op is not an `Operation`. - - -- - - - -#### `tf.Tensor.device` {#Tensor.device} - -The name of the device on which this tensor will be produced, or None. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Variable.md new file mode 100644 index 0000000000..b300fac583 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Variable.md @@ -0,0 +1,460 @@ +See the [Variables How To](../../how_tos/variables/index.md) for a high +level overview. + +A variable maintains state in the graph across calls to `run()`. You add a +variable to the graph by constructing an instance of the class `Variable`. + +The `Variable()` constructor requires an initial value for the variable, +which can be a `Tensor` of any type and shape. The initial value defines the +type and shape of the variable. After construction, the type and shape of +the variable are fixed. The value can be changed using one of the assign +methods. + +If you want to change the shape of a variable later you have to use an +`assign` Op with `validate_shape=False`. + +Just like any `Tensor`, variables created with `Variable()` can be used as +inputs for other Ops in the graph. Additionally, all the operators +overloaded for the `Tensor` class are carried over to variables, so you can +also add nodes to the graph by just doing arithmetic on variables. + +```python +import tensorflow as tf + +# Create a variable. 
+w = tf.Variable(<initial-value>, name=<optional-name>) + +# Use the variable in the graph like any Tensor. +y = tf.matmul(w, ...another variable or tensor...) + +# The overloaded operators are available too. +z = tf.sigmoid(w + y) + +# Assign a new value to the variable with `assign()` or a related method. +w.assign(w + 1.0) +w.assign_add(1.0) +``` + +When you launch the graph, variables have to be explicitly initialized before +you can run Ops that use their value. You can initialize a variable by +running its *initializer op*, restoring the variable from a save file, or +simply running an `assign` Op that assigns a value to the variable. In fact, +the variable *initializer op* is just an `assign` Op that assigns the +variable's initial value to the variable itself. + +```python +# Launch the graph in a session. +with tf.Session() as sess: + # Run the variable initializer. + sess.run(w.initializer) + # ...you now can run ops that use the value of 'w'... +``` + +The most common initialization pattern is to use the convenience function +`initialize_all_variables()` to add an Op to the graph that initializes +all the variables. You then run that Op after launching the graph. + +```python +# Add an Op to initialize all variables. +init_op = tf.initialize_all_variables() + +# Launch the graph in a session. +with tf.Session() as sess: + # Run the Op that initializes all variables. + sess.run(init_op) + # ...you can now run any Op that uses variable values... +``` + +If you need to create a variable with an initial value dependent on another +variable, use the other variable's `initialized_value()`. This ensures that +variables are initialized in the right order. + +All variables are automatically collected in the graph where they are +created. By default, the constructor adds the new variable to the graph +collection `GraphKeys.VARIABLES`. The convenience function +`all_variables()` returns the contents of that collection.
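The initialization lifecycle described above — the *initializer op* is just an assign of the initial value, and `initialized_value()` forces initialization ordering — can be sketched in plain Python. The `MiniVariable` class below is purely illustrative, not part of TensorFlow:

```python
class MiniVariable:
    """Sketch: a variable's initializer is just an assign of its initial value."""

    def __init__(self, initial_value):
        self._initial_value = initial_value
        self._value = None                 # unset until initialized

    def initializer(self):
        self.assign(self._initial_value)   # initializer == assign(initial_value)

    def assign(self, value):
        self._value = value

    def assign_add(self, delta):
        self._value += delta

    def value(self):
        if self._value is None:
            raise RuntimeError("Attempting to use an uninitialized variable")
        return self._value

    def initialized_value(self):
        self.initializer()                 # guarantees 'self' is initialized
        return self.value()                # before its value is consumed

v = MiniVariable(10.0)
w = MiniVariable(0.0)
w.assign(v.initialized_value() * 2.0)      # safe: v is initialized first
w.assign_add(1.0)                          # w now holds 21.0
```

Reading `v.value()` before running the initializer would raise, which is the plain-Python analogue of running an Op on an uninitialized variable.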
+ +When building a machine learning model it is often convenient to distinguish +between variables holding the trainable model parameters and other variables +such as a `global step` variable used to count training steps. To make this +easier, the variable constructor supports a `trainable=` parameter. If +`True`, the new variable is also added to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. The convenience function +`trainable_variables()` returns the contents of this collection. The +various `Optimizer` classes use this collection as the default list of +variables to optimize. + + +Creating a variable. + +- - - + +#### `tf.Variable.__init__(initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None)` {#Variable.__init__} + +Creates a new variable with value `initial_value`. + +The new variable is added to the graph collections listed in `collections`, +which defaults to `[GraphKeys.VARIABLES]`. + +If `trainable` is `True` the variable is also added to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. + +This constructor creates both a `variable` Op and an `assign` Op to set the +variable to its initial value. + +##### Args: + + +* `initial_value`: A `Tensor`, or Python object convertible to a `Tensor`, + which is the initial value for the Variable. The initial value must have + a shape specified unless `validate_shape` is set to False. Can also be a + callable with no argument that returns the initial value when called. In + that case, `dtype` must be specified. (Note that initializer functions + from init_ops.py must first be bound to a shape before being used here.) +* `trainable`: If `True`, the default, also adds the variable to the graph + collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as + the default list of variables to use by the `Optimizer` classes. +* `collections`: List of graph collections keys.
The new variable is added to + these collections. Defaults to `[GraphKeys.VARIABLES]`. +* `validate_shape`: If `False`, allows the variable to be initialized with a + value of unknown shape. If `True`, the default, the shape of + `initial_value` must be known. +* `caching_device`: Optional device string describing where the Variable + should be cached for reading. Defaults to the Variable's device. + If not `None`, caches on another device. Typical use is to cache + on the device where the Ops using the Variable reside, to deduplicate + copying through `Switch` and other conditional statements. +* `name`: Optional name for the variable. Defaults to `'Variable'` and gets + uniquified automatically. +* `variable_def`: `VariableDef` protocol buffer. If not `None`, recreates + the Variable object with its contents. `variable_def` and the other + arguments are mutually exclusive. +* `dtype`: If set, initial_value will be converted to the given type. + If `None`, either the datatype will be kept (if `initial_value` is + a Tensor), or `convert_to_tensor` will decide. + +##### Returns: + + A Variable. + +##### Raises: + + +* `ValueError`: If both `variable_def` and initial_value are specified. +* `ValueError`: If the initial value is not specified, or does not have a + shape and `validate_shape` is `True`. + + +- - - + +#### `tf.Variable.initialized_value()` {#Variable.initialized_value} + +Returns the value of the initialized variable. + +You should use this instead of the variable itself to initialize another +variable with a value that depends on the value of this variable. + +```python +# Initialize 'v' with a random tensor. +v = tf.Variable(tf.truncated_normal([10, 40])) +# Use `initialized_value` to guarantee that `v` has been +# initialized before its value is used to initialize `w`. +# The random values are picked only once. 
+w = tf.Variable(v.initialized_value() * 2.0) +``` + +##### Returns: + + A `Tensor` holding the value of this variable after its initializer + has run. + + + +Changing a variable value. + +- - - + +#### `tf.Variable.assign(value, use_locking=False)` {#Variable.assign} + +Assigns a new value to the variable. + +This is essentially a shortcut for `assign(self, value)`. + +##### Args: + + +* `value`: A `Tensor`. The new value for this variable. +* `use_locking`: If `True`, use locking during the assignment. + +##### Returns: + + A `Tensor` that will hold the new value of this variable after + the assignment has completed. + + +- - - + +#### `tf.Variable.assign_add(delta, use_locking=False)` {#Variable.assign_add} + +Adds a value to this variable. + + This is essentially a shortcut for `assign_add(self, delta)`. + +##### Args: + + +* `delta`: A `Tensor`. The value to add to this variable. +* `use_locking`: If `True`, use locking during the operation. + +##### Returns: + + A `Tensor` that will hold the new value of this variable after + the addition has completed. + + +- - - + +#### `tf.Variable.assign_sub(delta, use_locking=False)` {#Variable.assign_sub} + +Subtracts a value from this variable. + +This is essentially a shortcut for `assign_sub(self, delta)`. + +##### Args: + + +* `delta`: A `Tensor`. The value to subtract from this variable. +* `use_locking`: If `True`, use locking during the operation. + +##### Returns: + + A `Tensor` that will hold the new value of this variable after + the subtraction has completed. + + +- - - + +#### `tf.Variable.scatter_sub(sparse_delta, use_locking=False)` {#Variable.scatter_sub} + +Subtracts `IndexedSlices` from this variable. + +This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices, +sparse_delta.values)`. + +##### Args: + + +* `sparse_delta`: `IndexedSlices` to be subtracted from this variable. +* `use_locking`: If `True`, use locking during the operation. 
+ +##### Returns: + + A `Tensor` that will hold the new value of this variable after + the scattered subtraction has completed. + +##### Raises: + + +* `ValueError`: if `sparse_delta` is not an `IndexedSlices`. + + +- - - + +#### `tf.Variable.count_up_to(limit)` {#Variable.count_up_to} + +Increments this variable until it reaches `limit`. + +When that Op is run it tries to increment the variable by `1`. If +incrementing the variable would bring it above `limit` then the Op raises +the exception `OutOfRangeError`. + +If no error is raised, the Op outputs the value of the variable before +the increment. + +This is essentially a shortcut for `count_up_to(self, limit)`. + +##### Args: + + +* `limit`: value at which incrementing the variable raises an error. + +##### Returns: + + A `Tensor` that will hold the variable value before the increment. If no + other Op modifies this variable, the values produced will all be + distinct. + + + +- - - + +#### `tf.Variable.eval(session=None)` {#Variable.eval} + +In a session, computes and returns the value of this variable. + +This is not a graph construction method, it does not add ops to the graph. + +This convenience method requires a session where the graph containing this +variable has been launched. If no session is passed, the default session is +used. See the [Session class](../../api_docs/python/client.md#Session) for +more information on launching a graph and on sessions. + +```python +v = tf.Variable([1, 2]) +init = tf.initialize_all_variables() + +with tf.Session() as sess: + sess.run(init) + # Usage passing the session explicitly. + print(v.eval(sess)) + # Usage with the default session. The 'with' block + # above makes 'sess' the default session. + print(v.eval()) +``` + +##### Args: + + +* `session`: The session to use to evaluate this variable. If + none, the default session is used. + +##### Returns: + + A numpy `ndarray` with a copy of the value of this variable. + + + +Properties. 
+ +- - - + +#### `tf.Variable.name` {#Variable.name} + +The name of this variable. + + +- - - + +#### `tf.Variable.dtype` {#Variable.dtype} + +The `DType` of this variable. + + +- - - + +#### `tf.Variable.get_shape()` {#Variable.get_shape} + +The `TensorShape` of this variable. + +##### Returns: + + A `TensorShape`. + + +- - - + +#### `tf.Variable.device` {#Variable.device} + +The device of this variable. + + +- - - + +#### `tf.Variable.initializer` {#Variable.initializer} + +The initializer operation for this variable. + + +- - - + +#### `tf.Variable.graph` {#Variable.graph} + +The `Graph` of this variable. + + +- - - + +#### `tf.Variable.op` {#Variable.op} + +The `Operation` of this variable. + + + +#### Other Methods +- - - + +#### `tf.Variable.from_proto(variable_def)` {#Variable.from_proto} + +Returns a `Variable` object created from `variable_def`. + + +- - - + +#### `tf.Variable.initial_value` {#Variable.initial_value} + +Returns the Tensor used as the initial value for the variable. + +Note that this is different from `initialized_value()` which runs +the op that initializes the variable before returning its value. +This method returns the tensor that is used by the op that initializes +the variable. + +##### Returns: + + A `Tensor`. + + +- - - + +#### `tf.Variable.ref()` {#Variable.ref} + +Returns a reference to this variable. + +You usually do not need to call this method as all ops that need a reference +to the variable call it automatically. + +Returns is a `Tensor` which holds a reference to the variable. You can +assign a new value to the variable by passing the tensor to an assign op. +See [`value()`](#Variable.value) if you want to get the value of the +variable. + +##### Returns: + + A `Tensor` that is a reference to the variable. + + +- - - + +#### `tf.Variable.to_proto()` {#Variable.to_proto} + +Converts a `Variable` to a `VariableDef` protocol buffer. + +##### Returns: + + A `VariableDef` protocol buffer. 
+ + +- - - + +#### `tf.Variable.value()` {#Variable.value} + +Returns the last snapshot of this variable. + +You usually do not need to call this method as all ops that need the value +of the variable call it automatically through a `convert_to_tensor()` call. + +Returns a `Tensor` which holds the value of the variable. You can not +assign a new value to this tensor as it is not a reference to the variable. +See [`ref()`](#Variable.ref) if you want to get a reference to the +variable. + +To avoid copies, if the consumer of the returned value is on the same device +as the variable, this actually returns the live value of the variable, not +a copy. Updates to the variable are seen by the consumer. If the consumer +is on a different device it will get a copy of the variable. + +##### Returns: + + A `Tensor` containing the value of the variable. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add.md new file mode 100644 index 0000000000..738f0337d3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add.md @@ -0,0 +1,17 @@ +### `tf.add(x, y, name=None)` {#add} + +Returns x + y element-wise. + +*NOTE*: Add supports broadcasting. AddN does not. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
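The note above that `Add` broadcasts while `AddN` does not can be sketched in plain Python for the scalar-versus-vector case. Both helper functions are illustrative sketches, not TensorFlow APIs:

```python
def broadcast_add(x, y):
    """Sketch of tf.add's broadcasting for the scalar/vector case:
    a scalar operand is stretched to match the other operand's length."""
    if not isinstance(x, list):
        x = [x] * len(y)
    if not isinstance(y, list):
        y = [y] * len(x)
    if len(x) != len(y):
        raise ValueError("incompatible shapes")
    return [a + b for a, b in zip(x, y)]

def add_n(inputs):
    """Sketch of AddN: every input must have exactly the same shape."""
    length = len(inputs[0])
    if any(len(t) != length for t in inputs):
        raise ValueError("AddN requires all inputs to have the same shape")
    return [sum(vals) for vals in zip(*inputs)]

assert broadcast_add([1, 2, 3], 10) == [11, 12, 13]   # scalar is broadcast
assert add_n([[1, 2], [3, 4], [5, 6]]) == [9, 12]     # shapes must match
```

Passing a scalar alongside a vector to `add_n` would raise, whereas `broadcast_add` stretches it, which is the distinction the NOTE is making.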
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add_to_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add_to_collection.md deleted file mode 100644 index 1d8d752917..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.add_to_collection.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.add_to_collection(name, value)` {#add_to_collection} - -Wrapper for `Graph.add_to_collection()` using the default graph. - -See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection) -for more details. - -##### Args: - - -* `name`: The key for the collection. For example, the `GraphKeys` class - contains many standard names for collections. -* `value`: The value to add to the collection. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_dtype.md deleted file mode 100644 index 50a048aacb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_dtype.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.as_dtype(type_value)` {#as_dtype} - -Converts the given `type_value` to a `DType`. - -##### Args: - - -* `type_value`: A value that can be converted to a `tf.DType` - object. This may currently be a `tf.DType` object, a - [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), - a string type name, or a `numpy.dtype`. - -##### Returns: - - A `DType` corresponding to `type_value`. - -##### Raises: - - -* `TypeError`: If `type_value` cannot be converted to a `DType`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_integer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_integer.md deleted file mode 100644 index c75ba58765..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_integer.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.assert_integer(x, data=None, summarize=None, name=None)` {#assert_integer} - -Assert that `x` is of integer dtype. - -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_integer(x)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_integer(x)], x) -``` - -##### Args: - - -* `x`: `Tensor` whose basetype is integer and is not quantized. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_integer". - -##### Returns: - - Op that raises `InvalidArgumentError` if `x == y` is False. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_non_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_non_positive.md deleted file mode 100644 index 83eb36a95c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_non_positive.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.assert_non_positive(x, data=None, summarize=None, name=None)` {#assert_non_positive} - -Assert the condition `x <= 0` holds element-wise. 
- -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_non_positive(x)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_non_positive(x)], x) -``` - -Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`. -If `x` is empty this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). - Defaults to "assert_non_positive". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` is all non-positive. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_rank.md new file mode 100644 index 0000000000..e8da009641 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_rank.md @@ -0,0 +1,36 @@ +### `tf.assert_rank(x, rank, data=None, summarize=None, name=None)` {#assert_rank} + +Assert `x` has rank equal to `rank`. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_rank(x, 2)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_rank(x, 2)], x) +``` + +##### Args: + + +* `x`: Numeric `Tensor`. +* `rank`: Scalar integer `Tensor`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_rank". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` has specified rank. 
+ +##### Raises: + + +* `ValueError`: If static checks determine `x` has wrong rank. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_cholesky.md deleted file mode 100644 index 487680f50b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_cholesky.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.batch_cholesky(input, name=None)` {#batch_cholesky} - -Calculates the Cholesky decomposition of a batch of square matrices. - -The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions -form square matrices, with the same constraints as the single matrix Cholesky -decomposition above. The output is a tensor of the same shape as the input -containing the Cholesky decompositions for all input submatrices `[..., :, :]`. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[..., M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_fft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_fft2d.md new file mode 100644 index 0000000000..e7a2c7b943 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_fft2d.md @@ -0,0 +1,18 @@ +### `tf.batch_fft2d(input, name=None)` {#batch_fft2d} + +Compute the 2-dimensional discrete Fourier Transform over the inner-most + +2 dimensions of `input`. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + A complex64 tensor of the same shape as `input`. The inner-most 2 + dimensions of `input` are replaced with their 2D Fourier Transform. 
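The transform `batch_fft2d` applies to each inner-most `[M, N]` slice is the ordinary 2-D discrete Fourier transform. A naive pure-Python version (an illustration of the math only, not the TensorFlow kernel, and O(M²N²) rather than FFT-speed) is:

```python
import cmath

def dft2d(matrix):
    """Naive 2-D discrete Fourier transform of a list-of-lists `matrix`.
    Mirrors what is applied independently to each inner-most [M, N] slice."""
    m, n = len(matrix), len(matrix[0])
    out = [[0j] * n for _ in range(m)]
    for k in range(m):
        for l in range(n):
            s = 0j
            for a in range(m):
                for b in range(n):
                    # Standard DFT kernel exp(-2*pi*i*(k*a/M + l*b/N)).
                    s += matrix[a][b] * cmath.exp(-2j * cmath.pi * (k * a / m + l * b / n))
            out[k][l] = s
    return out
```

For an all-ones 2x2 input, all energy lands in the DC bin `out[0][0]`.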
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matmul.md deleted file mode 100644 index a4764435b8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matmul.md +++ /dev/null @@ -1,41 +0,0 @@ -### `tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)` {#batch_matmul} - -Multiplies slices of two tensors in batches. - -Multiplies all slices of `Tensor` `x` and `y` (each slice can be -viewed as an element of a batch), and arranges the individual results -in a single output tensor of the same batch size. Each of the -individual slices can optionally be adjointed (to adjoint a matrix -means to transpose and conjugate it) before multiplication by setting -the `adj_x` or `adj_y` flag to `True`, which are by default `False`. - -The input tensors `x` and `y` are 3-D or higher with shape `[..., r_x, c_x]` -and `[..., r_y, c_y]`. - -The output tensor is 3-D or higher with shape `[..., r_o, c_o]`, where: - - r_o = c_x if adj_x else r_x - c_o = r_y if adj_y else c_y - -It is computed as: - - output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :]) - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `complex64`, `complex128`. - 3-D or higher with shape `[..., r_x, c_x]`. -* `y`: A `Tensor`. Must have the same type as `x`. - 3-D or higher with shape `[..., r_y, c_y]`. -* `adj_x`: An optional `bool`. Defaults to `False`. - If `True`, adjoint the slices of `x`. Defaults to `False`. -* `adj_y`: An optional `bool`. Defaults to `False`. - If `True`, adjoint the slices of `y`. Defaults to `False`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
- 3-D or higher with shape `[..., r_o, c_o]` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matrix_determinant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matrix_determinant.md new file mode 100644 index 0000000000..83f9503a4d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.batch_matrix_determinant.md @@ -0,0 +1,19 @@ +### `tf.batch_matrix_determinant(input, name=None)` {#batch_matrix_determinant} + +Calculates the determinants for a batch of square matrices. + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a 1-D tensor containing the determinants +for all input submatrices `[..., :, :]`. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. + Shape is `[..., M, M]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. Shape is `[...]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.case.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.case.md new file mode 100644 index 0000000000..9314837b8e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.case.md @@ -0,0 +1,75 @@ +### `tf.case(pred_fn_pairs, default, exclusive=False, name='case')` {#case} + +Create a case operation. + +The `pred_fn_pairs` parameter is a dict or list of pairs of size N. +Each pair contains a boolean scalar tensor and a python callable that +creates the tensors to be returned if the boolean evaluates to True. +`default` is a callable generating a list of tensors. All the callables +in `pred_fn_pairs` as well as `default` should return the same number +and types of tensors. + +If `exclusive==True`, all predicates are evaluated, and a logging operation +with an error is returned if more than one of the predicates evaluates to +True. 
If `exclusive==False`, execution stops at the first predicate which +evaluates to True, and the tensors generated by the corresponding function +are returned immediately. If none of the predicates evaluate to True, this +operation returns the tensors generated by `default`. + +Example 1: + Pseudocode: + ``` + if (x < y) return 17; + else return 23; + ``` + + Expressions: + ``` + f1 = lambda: tf.constant(17) + f2 = lambda: tf.constant(23) + r = case([(tf.less(x, y), f1)], default=f2) + ``` + +Example 2: + Pseudocode: + ``` + if (x < y && x > z) raise OpError("Only one predicate may evaluate true"); + if (x < y) return 17; + else if (x > z) return 23; + else return -1; + ``` + + Expressions: + ``` + x = tf.constant(0) + y = tf.constant(1) + z = tf.constant(2) + def f1(): return tf.constant(17) + def f2(): return tf.constant(23) + def f3(): return tf.constant(-1) + r = case({tf.less(x, y): f1, tf.greater(x, z): f2}, + default=f3, exclusive=True) + ``` + +##### Args: + + +* `pred_fn_pairs`: Dict or list of pairs of a boolean scalar tensor and a + callable which returns a list of tensors. +* `default`: A callable that returns a list of tensors. +* `exclusive`: True iff at most one predicate is allowed to evaluate to True. +* `name`: A name for this operation (optional). + +##### Returns: + + The tensors returned by the first pair whose predicate evaluated to True, or + those returned by `default` if none does. + +##### Raises: + + +* `TypeError`: If `pred_fn_pairs` is not a list/dictionary. +* `TypeError`: If `pred_fn_pairs` is a list but does not contain 2-tuples. +* `TypeError`: If `fns[i]` is not callable for any i, or `default` is not + callable.
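The dispatch rule described for `tf.case` can be mirrored for plain Python booleans; `case_sketch` below is a hypothetical illustration of the semantics, not the graph-building op:

```python
def case_sketch(pred_fn_pairs, default, exclusive=False):
    """Pure-Python analogue of tf.case dispatch: with exclusive=False the
    first pair whose predicate is True wins; with exclusive=True more than
    one True predicate is an error; `default` runs when none is True."""
    true_fns = [fn for pred, fn in pred_fn_pairs if pred]
    if exclusive and len(true_fns) > 1:
        raise ValueError("Only one predicate may evaluate to True")
    return true_fns[0]() if true_fns else default()
```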
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.clip_by_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.clip_by_value.md deleted file mode 100644 index 7cd7e0311e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.clip_by_value.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value} - -Clips tensor values to a specified min and max. - -Given a tensor `t`, this operation returns a tensor of the same type and -shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. -Any values less than `clip_value_min` are set to `clip_value_min`. Any values -greater than `clip_value_max` are set to `clip_value_max`. - -##### Args: - - -* `t`: A `Tensor`. -* `clip_value_min`: A 0-D (scalar) `Tensor`. The minimum value to clip by. -* `clip_value_max`: A 0-D (scalar) `Tensor`. The maximum value to clip by. -* `name`: A name for the operation (optional). - -##### Returns: - - A clipped `Tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_op_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_op_to_graph.md new file mode 100644 index 0000000000..d549132fa2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_op_to_graph.md @@ -0,0 +1,29 @@ +### `tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='')` {#copy_op_to_graph} + +Given an `Operation` `org_instance` from one `Graph`, +initializes and returns a copy of it from another `Graph`, +under the specified scope (default `""`). + +The copying is done recursively, so any `Operation` whose output +is required to evaluate `org_instance` is also copied (unless +already done).
+ +Since `Variable` instances are copied separately, those required +to evaluate `org_instance` must be provided as input. + +Args: +org_instance: An `Operation` from some `Graph`. Could be a + `Placeholder` as well. +to_graph: The `Graph` to copy `org_instance` to. +variables: An iterable of `Variable` instances required to evaluate + `org_instance`. +scope: A scope for the new `Variable` (default `""`). + +##### Returns: + + The copied `Operation` from `to_graph`. + +##### Raises: + + +* `TypeError`: If `org_instance` is not an `Operation` or `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md deleted file mode 100644 index 85e336a29b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.contrib.copy_graph.copy_variable_to_graph(org_instance, to_graph, scope='')` {#copy_variable_to_graph} - -Given a `Variable` instance from one `Graph`, initializes and returns -a copy of it from another `Graph`, under the specified scope -(default `""`). - -Args: -org_instance: A `Variable` from some `Graph`. -to_graph: The `Graph` to copy the `Variable` to. -scope: A scope for the new `Variable` (default `""`). - -##### Returns: - - The copied `Variable` from `to_graph`. - -##### Raises: - - -* `TypeError`: If `org_instance` is not a `Variable`.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.DiscreteDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.DiscreteDistribution.md deleted file mode 100644 index 6e78e38ebe..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.DiscreteDistribution.md +++ /dev/null @@ -1,139 +0,0 @@ -Base class for discrete probability distributions. - -`DiscreteDistribution` defines the API for the likelihood functions `pmf` and -`log_pmf` of discrete probability distributions. - -Subclasses must override both `pmf` and `log_pmf` but one can call this base -class's implementation. - -See `BaseDistribution` for more information on the API for probability -distributions. -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.batch_shape(name=None)` {#DiscreteDistribution.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. - -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.cdf(value, name='cdf')` {#DiscreteDistribution.cdf} - -Cumulative distribution function. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.dtype` {#DiscreteDistribution.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.entropy(name=None)` {#DiscreteDistribution.entropy} - -Entropy of the distribution in nats. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.event_shape(name=None)` {#DiscreteDistribution.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. 
- -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.get_batch_shape()` {#DiscreteDistribution.get_batch_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `batch_shape`. May be only partially defined. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.get_event_shape()` {#DiscreteDistribution.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.log_cdf(value, name='log_cdf')` {#DiscreteDistribution.log_cdf} - -Log CDF. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.log_pmf(value, name='log_pmf')` {#DiscreteDistribution.log_pmf} - -Log of the probability mass function. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.mean` {#DiscreteDistribution.mean} - - - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.name` {#DiscreteDistribution.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.pmf(value, name='pmf')` {#DiscreteDistribution.pmf} - -Probability mass function. - - -- - - - -#### `tf.contrib.distributions.DiscreteDistribution.sample(n, seed=None, name=None)` {#DiscreteDistribution.sample} - -Generate `n` samples. - -##### Args: - - -* `n`: scalar. Number of samples to draw from each distribution. -* `seed`: Python integer seed for RNG -* `name`: name to give to the op. - -##### Returns: - - -* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` - with values of type `self.dtype`. 
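The `DiscreteDistribution` contract above — subclasses override both `pmf` and `log_pmf`, keeping them mutually consistent — can be illustrated with a toy scalar Bernoulli (a hypothetical class, not part of TensorFlow):

```python
import math

class BernoulliSketch(object):
    """Toy discrete distribution illustrating the pmf/log_pmf contract.
    Scalar batch and event shape for simplicity."""
    def __init__(self, p):
        self.p = p
    def pmf(self, value):
        # Probability mass: p for value 1, (1 - p) otherwise.
        return self.p if value == 1 else 1.0 - self.p
    def log_pmf(self, value):
        # Consistent with pmf by construction: log_pmf = log(pmf).
        return math.log(self.pmf(value))
```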
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.StudentT.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.StudentT.md deleted file mode 100644 index 816e5d5a83..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.StudentT.md +++ /dev/null @@ -1,245 +0,0 @@ -Student's t distribution with degree-of-freedom parameter df. - -#### Mathematical details - -The PDF of this distribution is: - -`f(t) = gamma((df+1)/2)/sqrt(df*pi)/gamma(df/2)*(1+t^2/df)^(-(df+1)/2)` - -#### Examples - -Examples of initialization of one or a batch of distributions. - -```python -# Define a single scalar Student t distribution. -single_dist = tf.contrib.distributions.StudentT(df=3) - -# Evaluate the pdf at 1, returning a scalar Tensor. -single_dist.pdf(1.) - -# Define a batch of two scalar valued Student t's. -# The first has degrees of freedom 2, mean 1, and scale 11. -# The second 3, 2 and 22. -multi_dist = tf.contrib.distributions.StudentT(df=[2, 3], - mu=[1, 2.], - sigma=[11, 22.]) - -# Evaluate the pdf of the first distribution on 0, and the second on 1.5, -# returning a length two tensor. -multi_dist.pdf([0, 1.5]) - -# Get 3 samples, returning a 3 x 2 tensor. -multi_dist.sample(3) -``` - -Arguments are broadcast when possible. - -```python -# Define a batch of two Student's t distributions. -# Both have df 2 and mean 1, but different scales. -dist = tf.contrib.distributions.StudentT(df=2, mu=1, sigma=[11, 22.]) - -# Evaluate the pdf of both distributions on the same point, 3.0, -# returning a length 2 tensor. -dist.pdf(3.0) -``` -- - - - -#### `tf.contrib.distributions.StudentT.__init__(df, mu, sigma, name='StudentT')` {#StudentT.__init__} - -Construct Student's t distributions. - -The distributions have degree of freedom `df`, mean `mu`, and scale `sigma`. 
- -The parameters `df`, `mu`, and `sigma` must be shaped in a way that supports -broadcasting (e.g. `df + mu + sigma` is a valid operation). - -##### Args: - - -* `df`: `float` or `double` tensor, the degrees of freedom of the - distribution(s). `df` must contain only positive values. -* `mu`: `float` or `double` tensor, the means of the distribution(s). -* `sigma`: `float` or `double` tensor, the scaling factor for the - distribution(s). `sigma` must contain only positive values. - Note that `sigma` is not the standard deviation of this distribution. -* `name`: The name to give Ops created by the initializer. - -##### Raises: - - -* `TypeError`: if mu and sigma are different dtypes. - - -- - - - -#### `tf.contrib.distributions.StudentT.batch_shape(name='batch_shape')` {#StudentT.batch_shape} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.cdf(value, name='cdf')` {#StudentT.cdf} - -Cumulative distribution function. - - -- - - - -#### `tf.contrib.distributions.StudentT.df` {#StudentT.df} - -Degrees of freedom in these Student's t distribution(s). - - -- - - - -#### `tf.contrib.distributions.StudentT.dtype` {#StudentT.dtype} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.entropy(name='entropy')` {#StudentT.entropy} - -The entropy of Student t distribution(s). - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.StudentT.event_shape(name='event_shape')` {#StudentT.event_shape} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.get_batch_shape()` {#StudentT.get_batch_shape} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.get_event_shape()` {#StudentT.get_event_shape} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.is_reparameterized` {#StudentT.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.log_cdf(value, name='log_cdf')` {#StudentT.log_cdf} - -Log CDF. 
- - -- - - - -#### `tf.contrib.distributions.StudentT.log_pdf(x, name='log_pdf')` {#StudentT.log_pdf} - -Log pdf of observations in `x` under these Student's t-distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `df`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.StudentT.mean` {#StudentT.mean} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.mu` {#StudentT.mu} - -Locations of these Student's t distribution(s). - - -- - - - -#### `tf.contrib.distributions.StudentT.name` {#StudentT.name} - - - - -- - - - -#### `tf.contrib.distributions.StudentT.pdf(x, name='pdf')` {#StudentT.pdf} - -The PDF of observations in `x` under these Student's t distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `df`, `mu`, and - `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. - - -- - - - -#### `tf.contrib.distributions.StudentT.sample(n, seed=None, name='sample')` {#StudentT.sample} - -Sample `n` observations from the Student t Distributions. - -##### Args: - - -* `n`: `Scalar`, type int32, the number of observations to sample. -* `seed`: Python integer, the random seed. -* `name`: The name to give this op. - -##### Returns: - - -* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` - with values of type `self.dtype`. - - -- - - - -#### `tf.contrib.distributions.StudentT.sigma` {#StudentT.sigma} - -Scaling factors of these Student's t distribution(s). 
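The density formula quoted at the top of these StudentT notes is easy to check numerically; the helper below is a plain-Python restatement of that formula for `mu=0`, `sigma=1` (a checking aid, not TensorFlow code):

```python
import math

def student_t_pdf(t, df):
    """Standard Student's t density (mu=0, sigma=1), transcribed from
    f(t) = gamma((df+1)/2)/sqrt(df*pi)/gamma(df/2)*(1+t^2/df)^(-(df+1)/2)."""
    return (math.gamma((df + 1.0) / 2.0)
            / math.sqrt(df * math.pi) / math.gamma(df / 2.0)
            * (1.0 + t * t / df) ** (-(df + 1.0) / 2.0))
```

With `df=1` this reduces to the Cauchy density, whose value at 0 is `1/pi`.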
- - -- - - - -#### `tf.contrib.distributions.StudentT.variance` {#StudentT.variance} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.ffmpeg.encode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.ffmpeg.encode_audio.md new file mode 100644 index 0000000000..fb9d958f26 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.ffmpeg.encode_audio.md @@ -0,0 +1,19 @@ +### `tf.contrib.ffmpeg.encode_audio(audio, file_format=None, samples_per_second=None)` {#encode_audio} + +Creates an op that encodes an audio file using sampled audio from a tensor. + +##### Args: + + +* `audio`: A rank 2 tensor that has time along dimension 0 and channels along + dimension 1. Dimension 0 is `samples_per_second * length` long in + seconds. +* `file_format`: The type of file to encode. "wav" is the only supported format. +* `samples_per_second`: The number of samples in the audio tensor per second of + audio. + +##### Returns: + + A scalar tensor that contains the encoded audio in the specified file + format. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.convolution2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.convolution2d.md deleted file mode 100644 index f296166c9c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.convolution2d.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.contrib.layers.convolution2d(*args, **kwargs)` {#convolution2d} - -Adds a 2D convolution followed by an optional batch_norm layer. - -`convolution2d` creates a variable called `weights`, representing the -convolutional kernel, that is convolved with the `inputs` to produce a -`Tensor` of activations. If a `normalizer_fn` is provided (such as -`batch_norm`), it is then applied. 
Otherwise, if `normalizer_fn` is -None and a `biases_initializer` is provided then a `biases` variable would be -created and added to the activations. Finally, if `activation_fn` is not `None`, -it is applied to the activations as well. - -##### Args: - - -* `inputs`: a 4-D tensor `[batch_size, height, width, channels]`. -* `num_outputs`: integer, the number of output filters. -* `kernel_size`: a list of length 2 `[kernel_height, kernel_width]` - of the filters. Can be an int if both values are the same. -* `stride`: a list of length 2 `[stride_height, stride_width]`. - Can be an int if both strides are the same. Note that presently - both strides must have the same value. -* `padding`: one of `VALID` or `SAME`. -* `activation_fn`: activation function. -* `normalizer_fn`: normalization function to use instead of `biases`. If - `normalizer_fn` is provided then `biases_initializer` and - `biases_regularizer` are ignored and `biases` are not created nor added. -* `normalizer_params`: normalization function parameters. -* `weights_initializer`: An initializer for the weights. -* `weights_regularizer`: Optional regularizer for the weights. -* `biases_initializer`: An initializer for the biases. If None skip biases. -* `biases_regularizer`: Optional regularizer for the biases. -* `reuse`: whether or not the layer and its variables should be reused. To be - able to reuse the layer scope must be given. -* `variables_collections`: optional list of collections for all the variables or - a dictionary containing a different list of collection per variable. -* `outputs_collections`: collection to add the outputs. -* `scope`: Optional scope for `variable_op_scope`. - -##### Returns: - - a tensor representing the output of the operation.
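The `padding` argument above determines the output's spatial size. Assuming the standard `SAME`/`VALID` conventions (the doc itself does not state the formulas), the per-dimension size works out as sketched here:

```python
import math

def conv_output_length(input_length, kernel_size, stride, padding):
    """Spatial output size of a 2-D convolution along one dimension under
    the usual SAME/VALID conventions. A sketch to accompany the `padding`
    argument, not taken verbatim from the TensorFlow op."""
    if padding == "SAME":
        # SAME pads so that output = ceil(input / stride).
        return int(math.ceil(input_length / float(stride)))
    elif padding == "VALID":
        # VALID uses only full windows: ceil((input - kernel + 1) / stride).
        return int(math.ceil((input_length - kernel_size + 1) / float(stride)))
    raise ValueError("padding must be 'SAME' or 'VALID'")
```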
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_tensor.md deleted file mode 100644 index 872ba5c9d4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_tensor.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.contrib.layers.summarize_tensor(tensor, tag=None)` {#summarize_tensor} - -Summarize a tensor using a suitable summary type. - -This function adds a summary op for `tensor`. The type of summary depends on -the shape of `tensor`. For scalars, a `scalar_summary` is created, for all -other tensors, `histogram_summary` is used. - -##### Args: - - -* `tensor`: The tensor to summarize -* `tag`: The tag to use, if None then use tensor's op's name. - -##### Returns: - - The summary op created or None for string tensors. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.variance_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.variance_scaling_initializer.md new file mode 100644 index 0000000000..c82f924432 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.variance_scaling_initializer.md @@ -0,0 +1,47 @@ +### `tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)` {#variance_scaling_initializer} + +Returns an initializer that generates tensors without scaling variance. + +When initializing a deep network, it is in principle advantageous to keep +the scale of the input variance constant, so it does not explode or diminish +by reaching the final layer. This initializer uses the following formula: + if mode='FAN_IN': # Count only number of input connections. + n = fan_in + elif mode='FAN_OUT': # Count only number of output connections.
+ n = fan_out + elif mode='FAN_AVG': # Average number of inputs and output connections. + n = (fan_in + fan_out)/2.0 + + truncated_normal(shape, 0.0, stddev=sqrt(factor / n)) + +To get http://arxiv.org/pdf/1502.01852v1.pdf use (Default): + - factor=2.0 mode='FAN_IN' uniform=False +To get http://arxiv.org/abs/1408.5093 use: + - factor=1.0 mode='FAN_IN' uniform=True +To get http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf use: + - factor=1.0 mode='FAN_AVG' uniform=True. +To get xavier_initializer use either: + - factor=1.0 mode='FAN_AVG' uniform=True. + - factor=1.0 mode='FAN_AVG' uniform=False. + +##### Args: + + +* `factor`: Float. A multiplicative factor. +* `mode`: String. 'FAN_IN', 'FAN_OUT', 'FAN_AVG'. +* `uniform`: Whether to use uniform or normally distributed random initialization. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer that generates tensors with unit variance. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. +* `TypeError`: if `mode` is not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG']. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.Estimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.Estimator.md new file mode 100644 index 0000000000..00f12fa0a1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.Estimator.md @@ -0,0 +1,215 @@ +Estimator class is the basic TensorFlow model trainer/evaluator. + +Parameters: + model_fn: Model function, takes features and targets tensors or dicts of + tensors and returns predictions and loss tensors. + E.g. `(features, targets) -> (predictions, loss)`. + model_dir: Directory to save model parameters, graph, etc.
+ classification: boolean, true if classification problem. + learning_rate: learning rate for the model. + optimizer: optimizer for the model, can be: + string: name of optimizer, like 'SGD', 'Adam', 'Adagrad', 'Ftrl', + 'Momentum', 'RMSProp'. + Full list in contrib/layers/optimizers.py + class: sub-class of Optimizer + (like tf.train.GradientDescentOptimizer). + clip_gradients: clip_norm value for call to `clip_by_global_norm`. None + denotes no gradient clipping. + config: Configuration object. +- - - + +#### `tf.contrib.learn.Estimator.__init__(model_fn=None, model_dir=None, classification=True, learning_rate=0.1, optimizer='Adagrad', clip_gradients=None, config=None)` {#Estimator.__init__} + + + + +- - - + +#### `tf.contrib.learn.Estimator.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=32, steps=None, metrics=None, name=None)` {#Estimator.evaluate} + +Evaluates given model with provided evaluation data. + +##### Args: + + +* `x`: features. +* `y`: targets. +* `input_fn`: Input function. If set, x and y must be None. +* `feed_fn`: Function creating a feed dict every time it is called. Called + once per iteration. +* `batch_size`: minibatch size to use on the input, defaults to 32. Ignored + if input_fn is set. +* `steps`: Number of steps to evaluate for. +* `metrics`: Dict of metric ops to run. If None, the default metric functions + are used; if {}, no metrics are used. +* `name`: Name of the evaluation if user needs to run multiple evaluations on + different data sets, such as evaluate on training data vs test data. + +##### Returns: + + Returns self. + +##### Raises: + + +* `ValueError`: If x or y are not None while input_fn or feed_fn is not None. + + +- - - + +#### `tf.contrib.learn.Estimator.fit(x, y, steps, batch_size=32, monitors=None)` {#Estimator.fit} + +Trains a model given training data X and y. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...].
Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns arrays of targets. The training target values + (class labels in classification, real numbers in regression). +* `steps`: number of steps to train model for. +* `batch_size`: minibatch size to use on the input, defaults to 32. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.Estimator.get_params(deep=True)` {#Estimator.get_params} + +Get parameters for this estimator. + +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.Estimator.model_dir` {#Estimator.model_dir} + + + + +- - - + +#### `tf.contrib.learn.Estimator.partial_fit(x, y, steps=1, batch_size=32, monitors=None)` {#Estimator.partial_fit} + +Incremental fit on a batch of samples. + +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This can +implement either iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at the same time. Or when the model is taking a long time +to converge, and you want to split up training into subparts. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns arrays of targets. The training target values + (class labels in classification, real numbers in regression).
+* `steps`: number of steps to train model for.
+* `batch_size`: minibatch size to use on the input, defaults to 32.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.Estimator.predict(x=None, input_fn=None, axis=None, batch_size=None)` {#Estimator.predict}
+
+Returns predictions for given features.
+
+##### Args:
+
+
+* `x`: features.
+* `input_fn`: Input function. If set, x must be None.
+* `axis`: Axis on which to argmax (for classification).
+    Last axis is used by default.
+* `batch_size`: Override default batch size.
+
+##### Returns:
+
+  Numpy array of predicted classes or regression values.
+
+
+- - -
+
+#### `tf.contrib.learn.Estimator.predict_proba(x=None, input_fn=None, batch_size=None)` {#Estimator.predict_proba}
+
+Returns prediction probabilities for given features (classification).
+
+##### Args:
+
+
+* `x`: features.
+* `input_fn`: Input function. If set, x must be None.
+* `batch_size`: Override default batch size.
+
+##### Returns:
+
+  Numpy array of predicted probabilities.
+
+
+- - -
+
+#### `tf.contrib.learn.Estimator.set_params(**params)` {#Estimator.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
+
+##### Returns:
+
+  self
+
+
+- - -
+
+#### `tf.contrib.learn.Estimator.train(input_fn, steps, monitors=None)` {#Estimator.train}
+
+Trains a model given input builder function.
+
+##### Args:
+
+
+* `input_fn`: Input builder function, returns tuple of dicts or
+    dict and Tensor.
+* `steps`: number of steps to train model for.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+  Returns self.
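The nested-parameter convention that `set_params` follows (from scikit-learn) can be sketched in plain Python. `Component`, `scaler`, and the parameter names below are hypothetical stand-ins for illustration only, not part of tf.contrib.learn:

```python
# Minimal sketch of the scikit-learn-style set_params convention, where a
# key like "scaler__factor" updates the "factor" parameter of the nested
# "scaler" component. Purely illustrative; not TensorFlow code.
class Component:
    def __init__(self, **params):
        self.params = dict(params)

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # "component__parameter" addresses a nested component.
                name, sub_key = key.split("__", 1)
                self.params[name].set_params(**{sub_key: value})
            else:
                self.params[key] = value
        return self  # like Estimator.set_params, returns self


scaler = Component(factor=1.0)
model = Component(scaler=scaler, learning_rate=0.1)
model.set_params(learning_rate=0.01, scaler__factor=2.0)
```

Here the top-level `learning_rate` is updated directly, while `scaler__factor` is routed to the nested `scaler` component.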
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NanLossDuringTrainingError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NanLossDuringTrainingError.md
new file mode 100644
index 0000000000..8b13789179
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NanLossDuringTrainingError.md
@@ -0,0 +1 @@
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowDNNRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowDNNRegressor.md
new file mode 100644
index 0000000000..182b81de75
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowDNNRegressor.md
@@ -0,0 +1,302 @@
+TensorFlow DNN Regressor model.
+
+Parameters:
+  hidden_units: List of hidden units per layer.
+  batch_size: Mini batch size.
+  steps: Number of steps to run over data.
+  optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad".
+  learning_rate: If this is a constant float value, no decay function is
+    used. Instead, a customized decay function can be passed that accepts
+    global_step as a parameter and returns a Tensor.
+    e.g. exponential decay function:
+    def exp_decay(global_step):
+        return tf.train.exponential_decay(
+            learning_rate=0.1, global_step=global_step,
+            decay_steps=2, decay_rate=0.001)
+  continue_training: when continue_training is True, once initialized
+    model will be continually trained on every call of fit.
+  config: RunConfig object that controls the configurations of the session,
+    e.g. num_cores, gpu_memory_fraction, etc.
+  verbose: Controls the verbosity, possible values:
+    0: the algorithm and debug information is muted.
+    1: trainer prints the progress.
+    2: log device placement is printed.
+  dropout: When not None, the probability we will drop out a given coordinate.
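The decay schedule that a custom `learning_rate` callable produces can be illustrated without a TensorFlow session. This pure-Python sketch mirrors the (non-staircase) formula used by `tf.train.exponential_decay`, `learning_rate * decay_rate ** (global_step / decay_steps)`; the default values here match the docstring's example and are illustrative only:

```python
# Pure-Python illustration of the schedule a custom learning_rate callable
# would implement; mirrors tf.train.exponential_decay's formula
#   decayed = learning_rate * decay_rate ** (global_step / decay_steps)
def exp_decay(global_step, learning_rate=0.1, decay_steps=2, decay_rate=0.001):
    return learning_rate * decay_rate ** (global_step / float(decay_steps))


start_lr = exp_decay(0)  # undecayed rate at step 0
later_lr = exp_decay(2)  # one full decay interval later
```

With `decay_rate=0.001` the rate shrinks by three orders of magnitude every `decay_steps` steps, which is why a decay function is preferred over a single constant when training runs long.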
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.__init__(hidden_units, n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1, dropout=None)` {#TensorFlowDNNRegressor.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.bias_` {#TensorFlowDNNRegressor.bias_}
+
+Returns bias of the DNN's bias layers.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowDNNRegressor.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowDNNRegressor.fit}
+
+Builds a neural network model given provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes variables.
+Consecutive calls will continue training the same model.
+This logic follows the partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+* `steps`: int, number of steps to train.
+    If None or 0, train for `self.steps`.
+* `monitors`: List of `BaseMonitor` objects to print training progress and
+    invoke early stopping.
+* `logdir`: the directory to save the log file that can be used for
+    optional visualization.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.get_params(deep=True)` {#TensorFlowDNNRegressor.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* `deep`: boolean, optional
+    If True, will return the parameters for this estimator and
+    contained subobjects that are estimators.
+
+##### Returns:
+
+  params : mapping of string to any
+    Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.get_tensor(name)` {#TensorFlowDNNRegressor.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.get_tensor_value(name)` {#TensorFlowDNNRegressor.get_tensor_value}
+
+Returns value of the tensor given by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.get_variable_names()` {#TensorFlowDNNRegressor.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+  List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.model_dir` {#TensorFlowDNNRegressor.model_dir}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.partial_fit(x, y)` {#TensorFlowDNNRegressor.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset. This either can
+implement iterative training or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at the same time. Or when model is taking long time
+to converge, and you want to split up training into subparts.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets.
The training target values
+    (class labels in classification, real numbers in regression).
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowDNNRegressor.predict}
+
+Predict class or regression for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `axis`: Which axis to argmax for classification.
+    By default axis 1 (next after batch) is used.
+    Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member
+    variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples]. The predicted classes or predicted
+    value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.predict_proba(x, batch_size=None)` {#TensorFlowDNNRegressor.predict_proba}
+
+Predict class probability of the input samples X.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples, n_classes]. The predicted
+    probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.restore(cls, path, config=None)` {#TensorFlowDNNRegressor.restore}
+
+Restores model from given path.
+
+##### Args:
+
+
+* `path`: Path to the checkpoints and other model information.
+* `config`: RunConfig object that controls the configurations of the session,
+    e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be
+    reconfigured.
+
+##### Returns:
+
+  Estimator, object of the subclass of TensorFlowEstimator.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.save(path)` {#TensorFlowDNNRegressor.save}
+
+Saves checkpoints and graph to given path.
+
+##### Args:
+
+
+* `path`: Folder to save model to.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.set_params(**params)` {#TensorFlowDNNRegressor.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
+
+##### Returns:
+
+  self
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowDNNRegressor.train}
+
+Trains a model given input builder function.
+
+##### Args:
+
+
+* `input_fn`: Input builder function, returns tuple of dicts or
+    dict and Tensor.
+* `steps`: number of steps to train model for.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowDNNRegressor.weights_` {#TensorFlowDNNRegressor.weights_}
+
+Returns weights of the DNN weight layers.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowLinearRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowLinearRegressor.md
new file mode 100644
index 0000000000..6c793e1b90
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowLinearRegressor.md
@@ -0,0 +1,279 @@
+TensorFlow Linear Regression model.
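The model family this estimator fits can be sketched in pure Python. The closed-form simple linear regression below is only an illustration of what a linear regressor's `weights_` and `bias_` represent for a single feature; it is not the estimator's actual gradient-descent training loop:

```python
# Closed-form ordinary least squares for one feature, as an illustration of
# the weight/bias a linear regressor learns. Illustrative only; the actual
# estimator trains with an optimizer such as Adagrad.
def fit_line(xs, ys):
    n = float(len(xs))
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    weight = cov / var              # analogous to the weights_ property
    bias = mean_y - weight * mean_x  # analogous to the bias_ property
    return weight, bias


w, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

On this toy data (y = 2x + 1) the closed form and a converged gradient-descent fit agree on the same weight and bias.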
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.__init__(n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowLinearRegressor.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.bias_` {#TensorFlowLinearRegressor.bias_}
+
+Returns bias of the linear regression.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowLinearRegressor.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowLinearRegressor.fit}
+
+Builds a neural network model given provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes variables.
+Consecutive calls will continue training the same model.
+This logic follows the partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+* `steps`: int, number of steps to train.
+    If None or 0, train for `self.steps`.
+* `monitors`: List of `BaseMonitor` objects to print training progress and
+    invoke early stopping.
+* `logdir`: the directory to save the log file that can be used for
+    optional visualization.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.get_params(deep=True)` {#TensorFlowLinearRegressor.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* `deep`: boolean, optional
+    If True, will return the parameters for this estimator and
+    contained subobjects that are estimators.
+
+##### Returns:
+
+  params : mapping of string to any
+    Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.get_tensor(name)` {#TensorFlowLinearRegressor.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.get_tensor_value(name)` {#TensorFlowLinearRegressor.get_tensor_value}
+
+Returns value of the tensor given by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.get_variable_names()` {#TensorFlowLinearRegressor.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+  List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.model_dir` {#TensorFlowLinearRegressor.model_dir}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.partial_fit(x, y)` {#TensorFlowLinearRegressor.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset. This either can
+implement iterative training or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at the same time. Or when model is taking long time
+to converge, and you want to split up training into subparts.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets.
The training target values
+    (class labels in classification, real numbers in regression).
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowLinearRegressor.predict}
+
+Predict class or regression for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `axis`: Which axis to argmax for classification.
+    By default axis 1 (next after batch) is used.
+    Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member
+    variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples]. The predicted classes or predicted
+    value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.predict_proba(x, batch_size=None)` {#TensorFlowLinearRegressor.predict_proba}
+
+Predict class probability of the input samples X.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples, n_classes]. The predicted
+    probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.restore(cls, path, config=None)` {#TensorFlowLinearRegressor.restore}
+
+Restores model from given path.
+
+##### Args:
+
+
+* `path`: Path to the checkpoints and other model information.
+* `config`: RunConfig object that controls the configurations of the session,
+    e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be
+    reconfigured.
+
+##### Returns:
+
+  Estimator, object of the subclass of TensorFlowEstimator.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.save(path)` {#TensorFlowLinearRegressor.save}
+
+Saves checkpoints and graph to given path.
+
+##### Args:
+
+
+* `path`: Folder to save model to.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.set_params(**params)` {#TensorFlowLinearRegressor.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
+
+##### Returns:
+
+  self
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowLinearRegressor.train}
+
+Trains a model given input builder function.
+
+##### Args:
+
+
+* `input_fn`: Input builder function, returns tuple of dicts or
+    dict and Tensor.
+* `steps`: number of steps to train model for.
+* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks
+    inside the training loop.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowLinearRegressor.weights_` {#TensorFlowLinearRegressor.weights_}
+
+Returns weights of the linear regression.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRNNRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRNNRegressor.md
new file mode 100644
index 0000000000..d23cd65402
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRNNRegressor.md
@@ -0,0 +1,312 @@
+TensorFlow RNN Regressor model.
+
+Parameters:
+  rnn_size: The size for rnn cell, e.g. size of your word embeddings.
+  cell_type: The type of rnn cell, including rnn, gru, and lstm.
+  num_layers: The number of layers of the rnn model.
+  input_op_fn: Function that will transform the input tensor, such as
+    creating word embeddings, byte list, etc. This takes
+    an argument X for input and returns transformed X.
+  bidirectional: boolean, Whether this is a bidirectional rnn.
+  sequence_length: If sequence_length is provided, dynamic calculation is
+    performed. This saves computational time when unrolling past max sequence
+    length.
+  initial_state: An initial state for the RNN. This must be a tensor of
+    appropriate type and shape [batch_size x cell.state_size].
+  batch_size: Mini batch size.
+  steps: Number of steps to run over data.
+  optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad".
+  learning_rate: If this is a constant float value, no decay function is
+    used. Instead, a customized decay function can be passed that accepts
+    global_step as a parameter and returns a Tensor.
+    e.g. exponential decay function:
+    def exp_decay(global_step):
+        return tf.train.exponential_decay(
+            learning_rate=0.1, global_step=global_step,
+            decay_steps=2, decay_rate=0.001)
+  continue_training: when continue_training is True, once initialized
+    model will be continually trained on every call of fit.
+  config: RunConfig object that controls the configurations of the
+    session, e.g. num_cores, gpu_memory_fraction, etc.
+  verbose: Controls the verbosity, possible values:
+    0: the algorithm and debug information is muted.
+    1: trainer prints the progress.
+    2: log device placement is printed.
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.__init__(rnn_size, cell_type='gru', num_layers=1, input_op_fn=null_input_op_fn, initial_state=None, bidirectional=False, sequence_length=None, n_classes=0, batch_size=32, steps=50, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRNNRegressor.__init__}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.bias_` {#TensorFlowRNNRegressor.bias_}
+
+Returns bias of the rnn layer.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRNNRegressor.evaluate}
+
+See base class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRNNRegressor.fit}
+
+Builds a neural network model given provided `model_fn` and training
+data X and y.
+
+Note: the first call constructs the graph and initializes variables.
+Consecutive calls will continue training the same model.
+This logic follows the partial_fit() interface in scikit-learn.
+
+To restart learning, create a new estimator.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+* `steps`: int, number of steps to train.
+    If None or 0, train for `self.steps`.
+* `monitors`: List of `BaseMonitor` objects to print training progress and
+    invoke early stopping.
+* `logdir`: the directory to save the log file that can be used for
+    optional visualization.
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.get_params(deep=True)` {#TensorFlowRNNRegressor.get_params}
+
+Get parameters for this estimator.
+
+##### Args:
+
+
+* `deep`: boolean, optional
+    If True, will return the parameters for this estimator and
+    contained subobjects that are estimators.
+
+##### Returns:
+
+  params : mapping of string to any
+    Parameter names mapped to their values.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.get_tensor(name)` {#TensorFlowRNNRegressor.get_tensor}
+
+Returns tensor by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.get_tensor_value(name)` {#TensorFlowRNNRegressor.get_tensor_value}
+
+Returns value of the tensor given by name.
+
+##### Args:
+
+
+* `name`: string, name of the tensor.
+
+##### Returns:
+
+  Numpy array - value of the tensor.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.get_variable_names()` {#TensorFlowRNNRegressor.get_variable_names}
+
+Returns list of all variable names in this model.
+
+##### Returns:
+
+  List of names.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.model_dir` {#TensorFlowRNNRegressor.model_dir}
+
+
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.partial_fit(x, y)` {#TensorFlowRNNRegressor.partial_fit}
+
+Incremental fit on a batch of samples.
+
+This method is expected to be called several times consecutively
+on different or the same chunks of the dataset. This either can
+implement iterative training or out-of-core/online training.
+
+This is especially useful when the whole dataset is too big to
+fit in memory at the same time. Or when model is taking long time
+to converge, and you want to split up training into subparts.
+
+##### Args:
+
+
+* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be
+    iterator that returns arrays of features. The training input
+    samples for fitting the model.
+
+* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
+    iterator that returns array of targets. The training target values
+    (class labels in classification, real numbers in regression).
+
+##### Returns:
+
+  Returns self.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowRNNRegressor.predict}
+
+Predict class or regression for X.
+
+For a classification model, the predicted class for each sample in X is
+returned. For a regression model, the predicted value based on X is
+returned.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `axis`: Which axis to argmax for classification.
+    By default axis 1 (next after batch) is used.
+    Use 2 for sequence predictions.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member
+    variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples]. The predicted classes or predicted
+    value.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.predict_proba(x, batch_size=None)` {#TensorFlowRNNRegressor.predict_proba}
+
+Predict class probability of the input samples X.
+
+##### Args:
+
+
+* `x`: array-like matrix, [n_samples, n_features...] or iterator.
+* `batch_size`: If test set is too big, use batch size to split
+    it into mini batches. By default the batch_size member variable is used.
+
+##### Returns:
+
+
+* `y`: array of shape [n_samples, n_classes]. The predicted
+    probabilities for each class.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.restore(cls, path, config=None)` {#TensorFlowRNNRegressor.restore}
+
+Restores model from given path.
+
+##### Args:
+
+
+* `path`: Path to the checkpoints and other model information.
+* `config`: RunConfig object that controls the configurations of the session,
+    e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be
+    reconfigured.
+
+##### Returns:
+
+  Estimator, object of the subclass of TensorFlowEstimator.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.save(path)` {#TensorFlowRNNRegressor.save}
+
+Saves checkpoints and graph to given path.
+
+##### Args:
+
+
+* `path`: Folder to save model to.
+
+
+- - -
+
+#### `tf.contrib.learn.TensorFlowRNNRegressor.set_params(**params)` {#TensorFlowRNNRegressor.set_params}
+
+Set the parameters of this estimator.
+
+The method works on simple estimators as well as on nested objects
+(such as pipelines). The former have parameters of the form
+``<component>__<parameter>`` so that it's possible to update each
+component of a nested object.
+ +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowRNNRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowRNNRegressor.train} + +Trains a model given input builder function. + +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowRNNRegressor.weights_` {#TensorFlowRNNRegressor.weights_} + +Returns weights of the rnn layer. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRegressor.md deleted file mode 100644 index 169509f72f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.TensorFlowRegressor.md +++ /dev/null @@ -1,279 +0,0 @@ -TensorFlow Linear Regression model. -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.__init__(n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRegressor.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.bias_` {#TensorFlowRegressor.bias_} - -Returns bias of the linear regression. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRegressor.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRegressor.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. 
-This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.get_params(deep=True)` {#TensorFlowRegressor.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.get_tensor(name)` {#TensorFlowRegressor.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.get_tensor_value(name)` {#TensorFlowRegressor.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.get_variable_names()` {#TensorFlowRegressor.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.model_dir` {#TensorFlowRegressor.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.partial_fit(x, y)` {#TensorFlowRegressor.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowRegressor.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.predict_proba(x, batch_size=None)` {#TensorFlowRegressor.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.restore(cls, path, config=None)` {#TensorFlowRegressor.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.save(path)` {#TensorFlowRegressor.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.set_params(**params)` {#TensorFlowRegressor.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowRegressor.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. 
Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRegressor.weights_` {#TensorFlowRegressor.weights_} - -Returns weights of the linear regression. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_features.md deleted file mode 100644 index 75b40f7e75..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_features.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.contrib.learn.read_batch_features(file_pattern, batch_size, features, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, parser_num_threads=1, name=None)` {#read_batch_features} - -Adds operations to read, queue, batch and parse `Example` protos. - -Given file pattern (or list of files), will setup a queue for file names, -read `Example` proto using provided `reader`, use batch queue to create -batches of examples of size `batch_size` and parse example given `features` -specification. - -All queue runners are added to the queue runners collection, and may be -started via `start_queue_runners`. - -All ops are added to the default graph. - -##### Args: - - -* `file_pattern`: List of files or pattern of file paths containing - `Example` records. See `tf.gfile.Glob` for pattern rules. -* `batch_size`: An int or scalar `Tensor` specifying the batch size to use. -* `features`: A `dict` mapping feature keys to `FixedLenFeature` or - `VarLenFeature` values. -* `reader`: A function or class that returns an object with - `read` method, (filename tensor) -> (example tensor). -* `randomize_input`: Whether the input should be randomized. -* `num_epochs`: Integer specifying the number of times to read through the - dataset. If None, cycles through the dataset forever. 
NOTE - If specified,
-  creates a variable that must be initialized, so call
-  tf.initialize_all_variables() as shown in the tests.
-* `queue_capacity`: Capacity for input queue.
-* `reader_num_threads`: The number of threads to read examples.
-* `parser_num_threads`: The number of threads to parse examples.
-* `name`: Name of resulting op.
-
-##### Returns:
-
-  A dict of `Tensor` or `SparseTensor` objects for each key in `features`.
-
-##### Raises:
-
-
-* `ValueError`: for invalid inputs.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.run_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.run_n.md
new file mode 100644
index 0000000000..8fa8f09cb5
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.run_n.md
@@ -0,0 +1,19 @@
+### `tf.contrib.learn.run_n(output_dict, feed_dict=None, restore_checkpoint_path=None, n=1)` {#run_n}
+
+Run `output_dict` tensors `n` times, with the same `feed_dict` each run.
+
+##### Args:
+
+
+* `output_dict`: A `dict` mapping string names to tensors to run. Must all be
+  from the same graph.
+* `feed_dict`: `dict` of input values to feed each run.
+* `restore_checkpoint_path`: A string containing the path to a checkpoint to
+  restore.
+* `n`: Number of times to repeat.
+
+##### Returns:
+
+  A list of `n` `dict` objects, each containing values read from `output_dict`
+  tensors.
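The contract of `run_n` — evaluate every entry of `output_dict` `n` times with the same inputs and collect one result dict per run — can be sketched in pure Python. Here the "tensors" are plain callables and the checkpoint-restoring behavior is omitted; this illustrates the return shape only, not the real implementation:

```python
def run_n(output_dict, feed_dict=None, n=1):
    # Evaluate each named output once per run, with the same feed each time,
    # and return a list of n result dicts (one per run).
    feed_dict = feed_dict or {}
    return [{name: fn(feed_dict) for name, fn in output_dict.items()}
            for _ in range(n)]

outputs = run_n({"double_x": lambda feed: 2 * feed["x"]},
                feed_dict={"x": 3}, n=2)
# outputs == [{"double_x": 6}, {"double_x": 6}]
```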
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.auc_using_histogram.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.auc_using_histogram.md
new file mode 100644
index 0000000000..01f67e402c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.auc_using_histogram.md
@@ -0,0 +1,38 @@
+### `tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)` {#auc_using_histogram}
+
+AUC computed by maintaining histograms.
+
+Rather than computing AUC directly, this Op maintains Variables containing
+histograms of the scores associated with `True` and `False` labels. By
+comparing these the AUC is generated, with some discretization error.
+See: "Efficient AUC Learning Curve Calculation" by Bouckaert.
+
+This AUC Op updates in `O(batch_size + nbins)` time and works well even with
+large class imbalance. The accuracy is limited by discretization error due
+to finite number of bins. If scores are concentrated in fewer bins,
+accuracy is lower. If this is a concern, we recommend trying different
+numbers of bins and comparing results.
+
+##### Args:
+
+
+* `boolean_labels`: 1-D boolean `Tensor`. Entry is `True` if the corresponding
+  record is in class.
+* `scores`: 1-D numeric `Tensor`, same shape as boolean_labels.
+* `score_range`: `Tensor` of shape `[2]`, same dtype as `scores`. The min/max
+  values of score that we expect. Scores outside range will be clipped.
+* `nbins`: Integer number of bins to use. Accuracy strictly increases as the
+  number of bins increases.
+* `collections`: List of graph collections keys. Internal histogram Variables
+  are added to these collections. Defaults to `[GraphKeys.LOCAL_VARIABLES]`.
+* `check_shape`: Boolean. If `True`, do a runtime shape check on the scores
+  and labels.
+* `name`: A name for this Op.
Defaults to "auc_using_histogram".
+
+##### Returns:
+
+
+* `auc`: `float32` scalar `Tensor`. Fetching this converts internal histograms
+  to auc value.
+* `update_op`: `Op`, when run, updates internal histograms.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_size.md
new file mode 100644
index 0000000000..8f58261e7d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_size.md
@@ -0,0 +1,22 @@
+### `tf.contrib.metrics.set_size(a, validate_indices=True)` {#set_size}
+
+Compute number of unique elements along last dimension of `a`.
+
+##### Args:
+
+
+* `a`: `SparseTensor`, with indices sorted in row-major order.
+* `validate_indices`: Whether to validate the order and range of sparse indices
+  in `a`.
+
+##### Returns:
+
+  For `a` ranked `n`, this is a `Tensor` with rank `n-1`, and the same 1st
+  `n-1` dimensions as `a`. Each value is the number of unique elements in
+  the corresponding `[0...n-1]` dimension of `a`.
+
+##### Raises:
+
+
+* `TypeError`: If `a` is of an invalid type.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_auc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_auc.md
deleted file mode 100644
index 2d444fac54..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_auc.md
+++ /dev/null
@@ -1,58 +0,0 @@
-### `tf.contrib.metrics.streaming_auc(predictions, labels, ignore_mask=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_auc}
-
-Computes the approximate AUC via a Riemann sum.
-
-The `streaming_auc` function creates four local variables, `true_positives`,
-`true_negatives`, `false_positives` and `false_negatives` that are used to
-compute the AUC.
To discretize the AUC curve, a linearly spaced set of
-thresholds is used to compute pairs of recall and precision values. The area
-under the curve is therefore computed using the height of the recall values
-by the false positive rate.
-
-This value is ultimately returned as `auc`, an idempotent
-operation that computes the area under a discretized curve of precision versus
-recall values (computed using the aforementioned variables). The
-`num_thresholds` variable controls the degree of discretization with larger
-numbers of thresholds more closely approximating the true AUC.
-
-To facilitate the estimation of the AUC over a stream of data, the function
-creates an `update_op` operation whose behavior is dependent on the value of
-`ignore_mask`. If `ignore_mask` is None, then `update_op` increments the
-`true_positives`, `true_negatives`, `false_positives` and `false_negatives`
-counts with the number of each found in the current `predictions` and `labels`
-`Tensors`. If `ignore_mask` is not `None`, then the increment is performed
-using only the elements of `predictions` and `labels` whose corresponding
-value in `ignore_mask` is `False`. In addition to performing the updates,
-`update_op` also returns the `auc`.
-
-##### Args:
-
-
-* `predictions`: A floating point `Tensor` of arbitrary shape and whose values
-  are in the range `[0, 1]`.
-* `labels`: A binary `Tensor` whose shape matches `predictions`.
-* `ignore_mask`: An optional, binary tensor whose size matches `predictions`.
-* `num_thresholds`: The number of thresholds to use when discretizing the roc
-  curve.
-* `metrics_collections`: An optional list of collections that `auc` should be
-  added to.
-* `updates_collections`: An optional list of collections that `update_op` should
-  be added to.
-* `name`: An optional variable_op_scope name.
-
-##### Returns:
-
-
-* `auc`: A scalar tensor representing the current area-under-curve.
-* `update_op`: An operation that increments the `true_positives`,
-  `true_negatives`, `false_positives` and `false_negatives` variables
-  appropriately and whose value matches `auc`.
-
-##### Raises:
-
-
-* `ValueError`: If the shape of `predictions` and `labels` do not match or if
-  `ignore_mask` is not `None` and its shape doesn't match `predictions` or
-  if either `metrics_collections` or `updates_collections` are not a list or
-  tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_percentage_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_percentage_less.md
new file mode 100644
index 0000000000..40ddae4d31
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_percentage_less.md
@@ -0,0 +1,47 @@
+### `tf.contrib.metrics.streaming_percentage_less(values, threshold, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_percentage_less}
+
+Computes the percentage of values less than the given threshold.
+
+The `streaming_percentage_less` function creates two local variables,
+`total` and `count` that are used to compute the percentage of `values` that
+fall below `threshold`. This rate is ultimately returned as `percentage`
+which is an idempotent operation that simply divides `total` by `count`.
+To facilitate the estimation of the percentage of values that fall under
+`threshold` over multiple batches of data, the function creates an
+`update_op` operation whose behavior is dependent on the value of
+`ignore_mask`. If `ignore_mask` is None, then `update_op`
+increments `total` with the number of elements of `values` that are less
+than `threshold` and `count` with the number of elements in `values`.
If
+`ignore_mask` is not `None`, then `update_op` increments `total` with the
+number of elements of `values` that are less than `threshold` and whose
+corresponding entries in `ignore_mask` are False, and `count` is incremented
+with the number of elements of `ignore_mask` that are False.
+
+##### Args:
+
+
+* `values`: A numeric `Tensor` of arbitrary size.
+* `threshold`: A scalar threshold.
+* `ignore_mask`: An optional mask of the same shape as `values` which indicates
+  which elements to ignore during metric computation.
+* `metrics_collections`: An optional list of collections that the metric
+  value variable should be added to.
+* `updates_collections`: An optional list of collections that the metric update
+  ops should be added to.
+* `name`: An optional variable_op_scope name.
+
+##### Returns:
+
+
+* `percentage`: A tensor representing the current mean, the value of `total`
+  divided by `count`.
+* `update_op`: An operation that increments the `total` and `count` variables
+  appropriately.
+
+##### Raises:
+
+
+* `ValueError`: If `ignore_mask` is not None and its shape doesn't match `values`
+  or if either `metrics_collections` or `updates_collections` are supplied
+  but are not a list or tuple.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md
deleted file mode 100644
index 85319f44dd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_root_mean_squared_error}
-
-Computes the root mean squared error between the labels and predictions.
- -The `streaming_root_mean_squared_error` function creates two local variables, -`total` and `count` that are used to compute the root mean squared error. -This average is ultimately returned as `root_mean_squared_error`: an -idempotent operation that takes the square root of the division of `total` -by `count`. To facilitate the estimation of the root mean squared error over a -stream of data, the function utilizes two operations. First, a `squared_error` -operation computes the element-wise square of the difference between -`predictions` and `labels`. Second, an `update_op` operation whose behavior is -dependent on the value of `weights`. If `weights` is None, then `update_op` -increments `total` with the reduced sum of `squared_error` and increments -`count` with the number of elements in `squared_error`. If `weights` is not -`None`, then `update_op` increments `total` with the reduced sum of the -product of `weights` and `squared_error` and increments `count` with the -reduced sum of `weights`. In addition to performing the updates, `update_op` -also returns the `root_mean_squared_error` value. - -##### Args: - - -* `predictions`: A `Tensor` of arbitrary shape. -* `labels`: A `Tensor` of the same shape as `predictions`. -* `weights`: An optional set of weights of the same shape as `predictions`. If - `weights` is not None, the function computes a weighted mean. -* `metrics_collections`: An optional list of collections that - `root_mean_squared_error` should be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `root_mean_squared_error`: A tensor representing the current mean, the value - of `total` divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `root_mean_squared_error`. 
- -##### Raises: - - -* `ValueError`: If `weights` is not `None` and its shape doesn't match - `predictions` or if either `metrics_collections` or `updates_collections` - are not a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_ndarray.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_ndarray.md new file mode 100644 index 0000000000..7b2a81d48e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_ndarray.md @@ -0,0 +1,20 @@ +### `tf.contrib.util.make_ndarray(tensor)` {#make_ndarray} + +Create a numpy ndarray from a tensor. + +Create a numpy ndarray with the same shape and data as the tensor. + +##### Args: + + +* `tensor`: A TensorProto. + +##### Returns: + + A numpy array with the tensor contents. + +##### Raises: + + +* `TypeError`: if tensor has unsupported type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md new file mode 100644 index 0000000000..f84a59be49 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md @@ -0,0 +1,44 @@ +### `tf.contrib.util.make_tensor_proto(values, dtype=None, shape=None)` {#make_tensor_proto} + +Create a TensorProto. + +##### Args: + + +* `values`: Values to put in the TensorProto. +* `dtype`: Optional tensor_pb2 DataType value. +* `shape`: List of integers representing the dimensions of tensor. + +##### Returns: + + A TensorProto. Depending on the type, it may contain data in the + "tensor_content" attribute, which is not directly useful to Python programs. + To access the values you should convert the proto back to a numpy ndarray + with tensor_util.MakeNdarray(proto). + +##### Raises: + + +* `TypeError`: if unsupported types are provided. 
+* `ValueError`: if arguments have inappropriate values.
+
+make_tensor_proto accepts "values" of a python scalar, a python list, a
+numpy ndarray, or a numpy scalar.
+
+If "values" is a python scalar or a python list, make_tensor_proto
+first converts it to a numpy ndarray. If dtype is None, the
+conversion tries its best to infer the right numpy data
+type. Otherwise, the resulting numpy array has a compatible data
+type with the given dtype.
+
+In either case above, the numpy ndarray (either the caller provided
+or the auto converted) must have a compatible type with dtype.
+
+make_tensor_proto then converts the numpy array to a tensor proto.
+
+If "shape" is None, the resulting tensor proto represents the numpy
+array precisely.
+
+Otherwise, "shape" specifies the tensor's shape and the numpy array
+cannot have more elements than what "shape" specifies.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.stripped_op_list_for_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.stripped_op_list_for_graph.md
deleted file mode 100644
index 23bfb28542..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.stripped_op_list_for_graph.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.util.stripped_op_list_for_graph(graph_def)` {#stripped_op_list_for_graph}
-
-Collect the stripped OpDefs for ops used by a graph.
-
-This function computes the `stripped_op_list` field of `MetaGraphDef` and
-similar protos. The result can be communicated from the producer to the
-consumer, which can then use the C++ function
-`RemoveNewDefaultAttrsFromGraphDef` to improve forwards compatibility.
-
-##### Args:
-
-
-* `graph_def`: A `GraphDef` proto, as from `graph.as_graph_def()`.
-
-##### Returns:
-
-  An `OpList` of ops used by the graph.
-
-##### Raises:
-
-
-* `ValueError`: If an unregistered op is used.
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.control_dependencies.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.control_dependencies.md deleted file mode 100644 index 070f8788e5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.control_dependencies.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.control_dependencies(control_inputs)` {#control_dependencies} - -Wrapper for `Graph.control_dependencies()` using the default graph. - -See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies) -for more details. - -##### Args: - - -* `control_inputs`: A list of `Operation` or `Tensor` objects which - must be executed or computed before running the operations - defined in the context. Can also be `None` to clear the control - dependencies. - -##### Returns: - - A context manager that specifies control dependencies for all - operations constructed within the context. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.diag_part.md new file mode 100644 index 0000000000..249eb80e50 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.diag_part.md @@ -0,0 +1,34 @@ +### `tf.diag_part(input, name=None)` {#diag_part} + +Returns the diagonal part of the tensor. + +This operation returns a tensor with the `diagonal` part +of the `input`. The `diagonal` part is computed as follows: + +Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a +tensor of rank `k` with dimensions `[D1,..., Dk]` where: + +`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`. + +For example: + +```prettyprint +# 'input' is [[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]] + +tf.diag_part(input) ==> [1, 2, 3, 4] +``` + +##### Args: + + +* `input`: A `Tensor`. 
Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`. + Rank k tensor where k is 2, 4, or 6. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. The extracted diagonal. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.digamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.digamma.md deleted file mode 100644 index 5af2d11062..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.digamma.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.digamma(x, name=None)` {#digamma} - -Computes Psi, the derivative of Lgamma (the log of the absolute value of - -`Gamma(x)`), element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.edit_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.edit_distance.md deleted file mode 100644 index e5f6471817..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.edit_distance.md +++ /dev/null @@ -1,65 +0,0 @@ -### `tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')` {#edit_distance} - -Computes the Levenshtein distance between sequences. - -This operation takes variable-length sequences (`hypothesis` and `truth`), -each provided as a `SparseTensor`, and computes the Levenshtein distance. -You can normalize the edit distance by length of `truth` by setting -`normalize` to true. 
-
-For example, given the following input:
-
-```python
-# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
-# (0,0) = ["a"]
-# (1,0) = ["b"]
-hypothesis = tf.SparseTensor(
-    [[0, 0, 0],
-     [1, 0, 0]],
-    ["a", "b"],
-    (2, 1, 1))
-
-# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
-# (0,0) = []
-# (0,1) = ["a"]
-# (1,0) = ["b", "c"]
-# (1,1) = ["a"]
-truth = tf.SparseTensor(
-    [[0, 1, 0],
-     [1, 0, 0],
-     [1, 0, 1],
-     [1, 1, 0]],
-    ["a", "b", "c", "a"],
-    (2, 2, 2))
-
-normalize = True
-```
-
-This operation would return the following:
-
-```python
-# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
-# by 'truth' lengths.
-output ==> [[inf, 1.0],  # (0,0): no truth, (0,1): no hypothesis
-            [0.5, 1.0]]  # (1,0): addition, (1,1): no hypothesis
-```
-
-##### Args:
-
-
-* `hypothesis`: A `SparseTensor` containing hypothesis sequences.
-* `truth`: A `SparseTensor` containing truth sequences.
-* `normalize`: A `bool`. If `True`, normalizes the Levenshtein distance by
-  length of `truth`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A dense `Tensor` with rank `R - 1`, where R is the rank of the
-  `SparseTensor` inputs `hypothesis` and `truth`.
-
-##### Raises:
-
-
-* `TypeError`: If either `hypothesis` or `truth` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.equal.md
deleted file mode 100644
index 998db9189f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.equal.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.equal(x, y, name=None)` {#equal}
-
-Returns the truth value of (x == y) element-wise.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* `y`: A `Tensor`.
Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.DataLossError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.DataLossError.md new file mode 100644 index 0000000000..3193e77ae3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.DataLossError.md @@ -0,0 +1,13 @@ +Raised when unrecoverable data loss or corruption is encountered. + +For example, this may be raised by running a +[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader) +operation, if the file is truncated while it is being read. + +- - - + +#### `tf.errors.DataLossError.__init__(node_def, op, message)` {#DataLossError.__init__} + +Creates a `DataLossError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md new file mode 100644 index 0000000000..49fec3c55c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md @@ -0,0 +1,14 @@ +Raised when a requested entity (e.g., a file or directory) was not found. + +For example, running the +[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader) +operation could raise `NotFoundError` if it receives the name of a file that +does not exist. + +- - - + +#### `tf.errors.NotFoundError.__init__(node_def, op, message)` {#NotFoundError.__init__} + +Creates a `NotFoundError`. 
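A common handling pattern for `NotFoundError` is to catch it and fall back to a default. The sketch below uses a local stand-in exception class so it runs without TensorFlow; in real code you would catch `tf.errors.NotFoundError` around the `session.run()` call, and `read_file` here is a made-up helper for illustration:

```python
class NotFoundError(Exception):
    """Stand-in for tf.errors.NotFoundError, for illustration only."""
    pass

def read_file(path, known_files):
    # Mimics a reader op: raises NotFoundError for a missing file.
    if path not in known_files:
        raise NotFoundError("file %s not found" % path)
    return known_files[path]

try:
    data = read_file("missing.txt", {"present.txt": b"hello"})
except NotFoundError:
    data = b""  # fall back to an empty payload
```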
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.ResourceExhaustedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.ResourceExhaustedError.md deleted file mode 100644 index a01e255be5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.ResourceExhaustedError.md +++ /dev/null @@ -1,12 +0,0 @@ -Some resource has been exhausted. - -For example, this error might be raised if a per-user quota is -exhausted, or perhaps the entire file system is out of space. - -- - - - -#### `tf.errors.ResourceExhaustedError.__init__(node_def, op, message)` {#ResourceExhaustedError.__init__} - -Creates a `ResourceExhaustedError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.expand_dims.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.expand_dims.md deleted file mode 100644 index a188cda506..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.expand_dims.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.expand_dims(input, dim, name=None)` {#expand_dims} - -Inserts a dimension of 1 into a tensor's shape. - -Given a tensor `input`, this operation inserts a dimension of 1 at the -dimension index `dim` of `input`'s shape. The dimension index `dim` starts at -zero; if you specify a negative number for `dim` it is counted backward from -the end. - -This operation is useful if you want to add a batch dimension to a single -element. For example, if you have a single image of shape `[height, width, -channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, -which will make the shape `[1, height, width, channels]`. 
- -Other examples: - -```prettyprint -# 't' is a tensor of shape [2] -shape(expand_dims(t, 0)) ==> [1, 2] -shape(expand_dims(t, 1)) ==> [2, 1] -shape(expand_dims(t, -1)) ==> [2, 1] - -# 't2' is a tensor of shape [2, 3, 5] -shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5] -shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5] -shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1] -``` - -This operation requires that: - -`-1-input.dims() <= dim <= input.dims()` - -This operation is related to `squeeze()`, which removes dimensions of -size 1. - -##### Args: - - -* `input`: A `Tensor`. -* `dim`: A `Tensor` of type `int32`. - 0-D (scalar). Specifies the dimension index at which to - expand the shape of `input`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - Contains the same data as `input`, but its shape has an additional - dimension of size 1 added. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fft.md deleted file mode 100644 index 5a2c3c635d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fft.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.fft(input, name=None)` {#fft} - -Compute the 1-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 vector. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. The 1D Fourier Transform of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fill.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fill.md deleted file mode 100644 index b6e51fa634..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fill.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.fill(dims, value, name=None)` {#fill} - -Creates a tensor filled with a scalar value. 
- -This operation creates a tensor of shape `dims` and fills it with `value`. - -For example: - -```prettyprint -# Output tensor has shape [2, 3]. -fill([2, 3], 9) ==> [[9, 9, 9] - [9, 9, 9]] -``` - -##### Args: - - -* `dims`: A `Tensor` of type `int32`. - 1-D. Represents the shape of the output tensor. -* `value`: A `Tensor`. 0-D (scalar). Value to fill the returned tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `value`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldl.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldl.md new file mode 100644 index 0000000000..dac4268165 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldl.md @@ -0,0 +1,44 @@ +### `tf.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldl} + +foldl on the list of tensors unpacked from `elems` on dimension 0. + +This foldl operator repeatedly applies the callable `fn` to a sequence +of elements from first to last. The elements are made of the tensors +unpacked from `elems` on dimension 0. The callable fn takes two tensors as +arguments. The first argument is the accumulated value computed from the +preceding invocation of fn. If `initializer` is None, `elems` must contain +at least one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is fn(initializer, values[0]).shape`. + +##### Args: + + +* `fn`: The callable to be performed. +* `elems`: A tensor to be unpacked on dimension 0. +* `initializer`: (optional) The initial value for the accumulator. +* `parallel_iterations`: (optional) The number of iterations allowed to run + in parallel. +* `back_prop`: (optional) True enables back propagation. +* `swap_memory`: (optional) True enables GPU-CPU memory swapping. 
+* `name`: (optional) Name prefix for the returned tensors. + +##### Returns: + + A tensor resulting from applying `fn` consecutively to the list of tensors + unpacked from `elems`, from first to last. + +##### Raises: + + +* `TypeError`: if `fn` is not callable. + +##### Example: + + ```python + elems = [1, 2, 3, 4, 5, 6] + sum = foldl(lambda a, x: a + x, elems) + # sum == 21 + ``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldr.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldr.md new file mode 100644 index 0000000000..0a75190c04 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.foldr.md @@ -0,0 +1,44 @@ +### `tf.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldr} + +foldr on the list of tensors unpacked from `elems` on dimension 0. + +This foldr operator repeatedly applies the callable `fn` to a sequence +of elements from last to first. The elements are made of the tensors +unpacked from `elems`. The callable fn takes two tensors as arguments. +The first argument is the accumulated value computed from the preceding +invocation of fn. If `initializer` is None, `elems` must contain at least +one element, and its first element is used as the initializer. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `fn(initializer, values[0]).shape`. + +##### Args: + + +* `fn`: The callable to be performed. +* `elems`: A tensor that is unpacked into a sequence of tensors to apply `fn`. +* `initializer`: (optional) The initial value for the accumulator. +* `parallel_iterations`: (optional) The number of iterations allowed to run + in parallel. +* `back_prop`: (optional) True enables back propagation. +* `swap_memory`: (optional) True enables GPU-CPU memory swapping. +* `name`: (optional) Name prefix for the returned tensors. 
+ +##### Returns: + + A tensor resulting from applying `fn` consecutively to the list of tensors + unpacked from `elems`, from last to first. + +##### Raises: + + +* `TypeError`: if `fn` is not callable. + +##### Example: + + ```python + elems = [1, 2, 3, 4, 5, 6] + sum = foldr(lambda a, x: a + x, elems) + # sum == 21 + ``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_collection.md deleted file mode 100644 index fc0044b490..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_collection.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.get_collection(key, scope=None)` {#get_collection} - -Wrapper for `Graph.get_collection()` using the default graph. - -See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection) -for more details. - -##### Args: - - -* `key`: The key for the collection. For example, the `GraphKeys` class - contains many standard names for collections. -* `scope`: (Optional.) If supplied, the resulting list is filtered to include - only items whose `name` attribute matches using `re.match`. Items - without a `name` attribute are never returned if a scope is supplied and - the choice or `re.match` means that a `scope` without special tokens - filters by prefix. - -##### Returns: - - The list of values in the collection with the given `name`, or - an empty list if no value has been added to that collection. The - list contains the values in the order under which they were - collected. 
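
Since `re.match` anchors at the start of the string, a `scope` argument without regex tokens behaves as a prefix filter, as the `tf.get_collection` doc above notes. A pure-Python sketch of that filtering behavior (illustrative only; `filter_by_scope` and the names are hypothetical, not TensorFlow API):

```python
import re

# Hypothetical collection-item names, standing in for ops that
# carry a `name` attribute.
names = ["layer1/weights", "layer1/biases", "layer2/weights"]

def filter_by_scope(names, scope):
    # re.match anchors at position 0, so a plain scope string
    # acts as a prefix filter; regex tokens still work.
    return [n for n in names if re.match(scope, n)]

print(filter_by_scope(names, "layer1"))     # ['layer1/weights', 'layer1/biases']
print(filter_by_scope(names, "layer.*/w"))  # ['layer1/weights', 'layer2/weights']
```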
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_variable_scope.md new file mode 100644 index 0000000000..4a0d3bc775 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.get_variable_scope.md @@ -0,0 +1,4 @@ +### `tf.get_variable_scope()` {#get_variable_scope} + +Returns the current variable scope. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.greater_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.greater_equal.md new file mode 100644 index 0000000000..9d68429c36 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.greater_equal.md @@ -0,0 +1,15 @@ +### `tf.greater_equal(x, y, name=None)` {#greater_equal} + +Returns the truth value of (x >= y) element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `y`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.histogram_fixed_width.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.histogram_fixed_width.md deleted file mode 100644 index 4a2997103b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.histogram_fixed_width.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)` {#histogram_fixed_width} - -Return histogram of values. - -Given the tensor `values`, this operation returns a rank 1 histogram counting -the number of entries in `values` that fell into every bin. The bins are -equal width and determined by the arguments `value_range` and `nbins`. 
- -##### Args: - - -* `values`: Numeric `Tensor`. -* `value_range`: Shape [2] `Tensor`. new_values <= value_range[0] will be - mapped to hist[0], values >= value_range[1] will be mapped to hist[-1]. - Must be same dtype as new_values. -* `nbins`: Scalar `int32 Tensor`. Number of histogram bins. -* `dtype`: dtype for returned histogram. -* `name`: A name for this operation (defaults to 'histogram_fixed_width'). - -##### Returns: - - A 1-D `Tensor` holding histogram of values. - - -* `Examples`: - -```python -# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) -nbins = 5 -value_range = [0.0, 5.0] -new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] - -with tf.default_session() as sess: - hist = tf.histogram_fixed_width(new_values, value_range, nbins=5) - variables.initialize_all_variables().run() - sess.run(hist) => [2, 1, 1, 0, 2] -``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md new file mode 100644 index 0000000000..1cf6860651 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md @@ -0,0 +1,29 @@ +### `tf.igamma(a, x, name=None)` {#igamma} + +Compute the lower regularized incomplete Gamma function `P(a, x)`. + +The lower regularized incomplete Gamma function is defined as: + +``` +P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x) +``` +where +``` +gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt +``` +is the lower incomplete Gamma function. + +Note, above `Q(a, x)` (`Igammac`) is the upper regularized incomplete +Gamma function. + +##### Args: + + +* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `x`: A `Tensor`. Must have the same type as `a`. +* `name`: A name for the operation (optional).

##### Returns:

  A `Tensor`. Has the same type as `a`.
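
The definition of `P(a, x)` can be checked numerically. A rough pure-Python sketch (midpoint-rule integration, not the TensorFlow kernel; the helper name is hypothetical) verifying the closed form `P(1, x) = 1 - exp(-x)`:

```python
import math

def lower_regularized_gamma(a, x, steps=20000):
    # Midpoint-rule integration of gamma(a, x) = int_0^x t^(a-1) e^(-t) dt,
    # then normalization by Gamma(a). A crude numeric sketch only.
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += t ** (a - 1) * math.exp(-t)
    return total * h / math.gamma(a)

# For a = 1 the closed form is P(1, x) = 1 - exp(-x).
approx = lower_regularized_gamma(1.0, 2.0)
print(abs(approx - (1 - math.exp(-2.0))) < 1e-6)  # True
```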
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.adjust_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.adjust_brightness.md deleted file mode 100644 index 7743f0180c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.adjust_brightness.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.image.adjust_brightness(image, delta)` {#adjust_brightness} - -Adjust the brightness of RGB or Grayscale images. - -This is a convenience method that converts an RGB image to float -representation, adjusts its brightness, and then converts it back to the -original data type. If several adjustments are chained it is advisable to -minimize the number of redundant conversions. - -The value `delta` is added to all components of the tensor `image`. Both -`image` and `delta` are converted to `float` before adding (and `image` is -scaled appropriately if it is in fixed-point representation). For regular -images, `delta` should be in the range `[0,1)`, as it is added to the image in -floating point representation, where pixel values are in the `[0,1)` range. - -##### Args: - - -* `image`: A tensor. -* `delta`: A scalar. Amount to add to the pixel values. - -##### Returns: - - A brightness-adjusted tensor of the same shape and type as `image`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.draw_bounding_boxes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.draw_bounding_boxes.md deleted file mode 100644 index 0e1c6115c7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.draw_bounding_boxes.md +++ /dev/null @@ -1,32 +0,0 @@ -### `tf.image.draw_bounding_boxes(images, boxes, name=None)` {#draw_bounding_boxes} - -Draw bounding boxes on a batch of images. - -Outputs a copy of `images` but draws on top of the pixels zero or more bounding -boxes specified by the locations in `boxes`. 
The coordinates of the each -bounding box in `boxes are encoded as `[y_min, x_min, y_max, x_max]`. The -bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and -height of the underlying image. - -For example, if an image is 100 x 200 pixels and the bounding box is -`[0.1, 0.5, 0.2, 0.9]`, the bottom-left and upper-right coordinates of the -bounding box will be `(10, 40)` to `(50, 180)`. - -Parts of the bounding box may fall outside the image. - -##### Args: - - -* `images`: A `Tensor`. Must be one of the following types: `float32`, `half`. - 4-D with shape `[batch, height, width, depth]`. A batch of images. -* `boxes`: A `Tensor` of type `float32`. - 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding - boxes. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `images`. - 4-D with the same shape as `images`. The batch of input images with - bounding boxes drawn on the images. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.grayscale_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.grayscale_to_rgb.md new file mode 100644 index 0000000000..755b66141b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.grayscale_to_rgb.md @@ -0,0 +1,17 @@ +### `tf.image.grayscale_to_rgb(images, name=None)` {#grayscale_to_rgb} + +Converts one or more images from Grayscale to RGB. + +Outputs a tensor of the same `DType` and rank as `images`. The size of the +last dimension of the output is 3, containing the RGB value of the pixels. + +##### Args: + + +* `images`: The Grayscale tensor to convert. Last dimension must be size 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The converted grayscale image(s). 
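
Because the size of the output's last dimension is 3, grayscale-to-RGB conversion amounts to replicating the single gray channel three times. A minimal pure-Python sketch of that semantics (hypothetical helper using nested lists, not TensorFlow code):

```python
def grayscale_to_rgb(image):
    # image: nested lists with shape [height, width, 1].
    # Replicates the single channel into the three RGB channels.
    return [[[px[0]] * 3 for px in row] for row in image]

gray = [[[0.2], [0.8]]]          # shape [1, 2, 1]
print(grayscale_to_rgb(gray))    # [[[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]]]
```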
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_flip_left_right.md deleted file mode 100644 index d063895136..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_flip_left_right.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.image.random_flip_left_right(image, seed=None)` {#random_flip_left_right} - -Randomly flip an image horizontally (left to right). - -With a 1 in 2 chance, outputs the contents of `image` flipped along the -second dimension, which is `width`. Otherwise output the image as-is. - -##### Args: - - -* `image`: A 3-D tensor of shape `[height, width, channels].` -* `seed`: A Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. - -##### Returns: - - A 3-D tensor of the same type and shape as `image`. - -##### Raises: - - -* `ValueError`: if the shape of `image` not supported. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_hue.md deleted file mode 100644 index 09a4ebc17f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_hue.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.image.random_hue(image, max_delta, seed=None)` {#random_hue} - -Adjust the hue of an RGB image by a random factor. - -Equivalent to `adjust_hue()` but uses a `delta` randomly -picked in the interval `[-max_delta, max_delta]`. - -`max_delta` must be in the interval `[0, 0.5]`. - -##### Args: - - -* `image`: RGB image or images. Size of the last dimension must be 3. -* `max_delta`: float. Maximum value for the random delta. -* `seed`: An operation-specific seed. 
It will be used in conjunction - with the graph-level seed to determine the real seeds that will be - used in this operation. Please see the documentation of - set_random_seed for its interaction with the graph-level random seed. - -##### Returns: - - 3-D float tensor of shape `[height, width, channels]`. - -##### Raises: - - -* `ValueError`: if `max_delta` is invalid. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_saturation.md new file mode 100644 index 0000000000..397bfc4d0b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.random_saturation.md @@ -0,0 +1,27 @@ +### `tf.image.random_saturation(image, lower, upper, seed=None)` {#random_saturation} + +Adjust the saturation of an RGB image by a random factor. + +Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly +picked in the interval `[lower, upper]`. + +##### Args: + + +* `image`: RGB image or images. Size of the last dimension must be 3. +* `lower`: float. Lower bound for the random saturation factor. +* `upper`: float. Upper bound for the random saturation factor. +* `seed`: An operation-specific seed. It will be used in conjunction + with the graph-level seed to determine the real seeds that will be + used in this operation. Please see the documentation of + set_random_seed for its interaction with the graph-level random seed. + +##### Returns: + + Adjusted image(s), same shape and DType as `image`. + +##### Raises: + + +* `ValueError`: if `upper <= lower` or if `lower < 0`. 
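
A pure-Python sketch of drawing the random `saturation_factor` with the validation documented for `tf.image.random_saturation` (hypothetical helper; the real op also applies the factor to the image):

```python
import random

def pick_saturation_factor(lower, upper, seed=None):
    # Mirrors the documented error cases: upper must exceed lower,
    # and lower must be non-negative.
    if upper <= lower:
        raise ValueError("upper must be > lower.")
    if lower < 0:
        raise ValueError("lower must be non-negative.")
    rng = random.Random(seed)
    return rng.uniform(lower, upper)

factor = pick_saturation_factor(0.5, 1.5, seed=0)
print(0.5 <= factor <= 1.5)  # True
```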
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_finite.md deleted file mode 100644 index db038e9919..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_finite.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.is_finite(x, name=None)` {#is_finite} - -Returns which elements of x are finite. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_inf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_inf.md new file mode 100644 index 0000000000..8955d5c9cc --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_inf.md @@ -0,0 +1,14 @@ +### `tf.is_inf(x, name=None)` {#is_inf} + +Returns which elements of x are Inf. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. 
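
The element-wise semantics of `tf.is_finite` / `tf.is_inf` above correspond to the standard IEEE-754 floating-point predicates; a pure-Python illustration:

```python
import math

values = [1.0, float("inf"), float("-inf"), float("nan")]

# Element-wise checks matching the documented semantics:
# is_inf is True for +/-Inf only; is_finite excludes Inf and NaN.
print([math.isinf(v) for v in values])     # [False, True, True, False]
print([math.isfinite(v) for v in values])  # [True, False, False, False]
```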
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_numeric_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_numeric_tensor.md deleted file mode 100644 index c2e61b856d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_numeric_tensor.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor} - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_strictly_increasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_strictly_increasing.md deleted file mode 100644 index bdaedd519e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_strictly_increasing.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing} - -Returns `True` if `x` is strictly increasing. - -Elements of `x` are compared in row-major order. The tensor `[x[0],...]` -is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`. -If `x` has less than two elements, it is trivially strictly increasing. - -See also: `is_non_decreasing` - -##### Args: - - -* `x`: Numeric `Tensor`. -* `name`: A name for this operation (optional). - Defaults to "is_strictly_increasing" - -##### Returns: - - Boolean `Tensor`, equal to `True` iff `x` is strictly increasing. - -##### Raises: - - -* `TypeError`: if `x` is not a numeric tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_variable_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_variable_initialized.md deleted file mode 100644 index d8383439ab..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.is_variable_initialized.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.is_variable_initialized(variable)` {#is_variable_initialized} - -Tests if a variable has been initialized. 
- -##### Args: - - -* `variable`: A `Variable`. - -##### Returns: - - Returns a scalar boolean Tensor, `True` if the variable has been - initialized, `False` otherwise. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.lbeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.lbeta.md deleted file mode 100644 index e3ee18dfb3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.lbeta.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.lbeta(x, name='lbeta')` {#lbeta} - -Computes `ln(|Beta(x)|)`, reducing along the last dimension. - -Given one-dimensional `z = [z_0,...,z_{K-1}]`, we define - -```Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)``` - -And for `n + 1` dimensional `x` with shape `[N1, ..., Nn, K]`, we define -`lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|)`. In other words, -the last dimension is treated as the `z` vector. - -Note that if `z = [u, v]`, then -`Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt`, which defines the traditional -bivariate beta function. - -##### Args: - - -* `x`: A rank `n + 1` `Tensor` with type `float`, or `double`. -* `name`: A name for the operation (optional). - -##### Returns: - - The logarithm of `|Beta(x)|` reducing along the last dimension. - -##### Raises: - - -* `ValueError`: If `x` is empty with rank one or less. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.listdiff.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.listdiff.md deleted file mode 100644 index 1f04bd8d9e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.listdiff.md +++ /dev/null @@ -1,40 +0,0 @@ -### `tf.listdiff(x, y, name=None)` {#listdiff} - -Computes the difference between two lists of numbers or strings. - -Given a list `x` and a list `y`, this operation returns a list `out` that -represents all values that are in `x` but not in `y`. 
The returned list `out` -is sorted in the same order that the numbers appear in `x` (duplicates are -preserved). This operation also returns a list `idx` that represents the -position of each `out` element in `x`. In other words: - -`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` - -For example, given this input: - -```prettyprint -x = [1, 2, 3, 4, 5, 6] -y = [1, 3, 5] -``` - -This operation would return: - -```prettyprint -out ==> [2, 4, 6] -idx ==> [1, 3, 5] -``` - -##### Args: - - -* `x`: A `Tensor`. 1-D. Values to keep. -* `y`: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of `Tensor` objects (out, idx). - -* `out`: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`. -* `idx`: A `Tensor` of type `int32`. 1-D. Positions of `x` values preserved in `out`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md deleted file mode 100644 index 0f38dfe4d5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.load_op_library(library_filename)` {#load_op_library} - -Loads a TensorFlow plugin, containing custom ops and kernels. - -Pass "library_filename" to a platform-specific mechanism for dynamically -loading a library. The rules for determining the exact location of the -library are platform-specific and are not documented here. - -##### Args: - - -* `library_filename`: Path to the plugin. - Relative or absolute filesystem path to a dynamic library file. - -##### Returns: - - A python module containing the Python wrappers for Ops defined in - the plugin. - -##### Raises: - - -* `RuntimeError`: when unable to load the library or get the python wrappers. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.matrix_determinant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.matrix_determinant.md deleted file mode 100644 index a5cd5a7fe6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.matrix_determinant.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.matrix_determinant(input, name=None)` {#matrix_determinant} - -Calculates the determinant of a square matrix. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. - A tensor of shape `[M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - A scalar, equal to the determinant of the input. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md new file mode 100644 index 0000000000..467a666e2c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md @@ -0,0 +1,13 @@ +### `tf.moving_average_variables()` {#moving_average_variables} + +Returns all variables that maintain their moving averages. + +If an `ExponentialMovingAverage` object is created and the `apply()` +method is called on a list of variables, these variables will +be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. +This convenience function returns the contents of that collection. + +##### Returns: + + A list of Variable objects. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.avg_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.avg_pool3d.md deleted file mode 100644 index 76503e0567..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.avg_pool3d.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.nn.avg_pool3d(input, ksize, strides, padding, name=None)` {#avg_pool3d} - -Performs 3D average pooling on the input. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Shape `[batch, depth, rows, cols, channels]` tensor to pool over. -* `ksize`: A list of `ints` that has length `>= 5`. - 1-D tensor of length 5. The size of the window for each dimension of - the input tensor. Must have `ksize[0] = ksize[1] = 1`. -* `strides`: A list of `ints` that has length `>= 5`. - 1-D tensor of length 5. The stride of the sliding window for each - dimension of `input`. Must have `strides[0] = strides[4] = 1`. -* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - The average pooled output tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d.md deleted file mode 100644 index 684a3d5727..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d} - -Computes a 2-D convolution given 4-D `input` and `filter` tensors. 
- -Given an input tensor of shape `[batch, in_height, in_width, in_channels]` -and a filter / kernel tensor of shape -`[filter_height, filter_width, in_channels, out_channels]`, this op -performs the following: - -1. Flattens the filter to a 2-D matrix with shape - `[filter_height * filter_width * in_channels, output_channels]`. -2. Extracts image patches from the input tensor to form a *virtual* - tensor of shape `[batch, out_height, out_width, - filter_height * filter_width * in_channels]`. -3. For each patch, right-multiplies the filter matrix and the image patch - vector. - -In detail, with the default NHWC format, - - output[b, i, j, k] = - sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * - filter[di, dj, q, k] - -Must have `strides[0] = strides[3] = 1`. For the most common case of the same -horizontal and vertices strides, `strides = [1, stride, stride, 1]`. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `filter`: A `Tensor`. Must have the same type as `input`. -* `strides`: A list of `ints`. - 1-D of length 4. The stride of the sliding window for each dimension - of `input`. Must be in the same order as the dimension specified with format. -* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `use_cudnn_on_gpu`: An optional `bool`. Defaults to `True`. -* `data_format`: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`. - Specify the data format of the input and output data. With the - default format "NHWC", the data is stored in the order of: - [batch, in_height, in_width, in_channels]. - Alternatively, the format could be "NCHW", the data storage order of: - [batch, in_channels, in_height, in_width]. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. 
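
The summation formula in the `conv2d` doc above can be transcribed directly into a (slow) pure-Python reference for `VALID` padding and the default NHWC format. A sketch only, with nested lists standing in for tensors and a hypothetical helper name:

```python
def conv2d_nhwc(inp, filt, strides):
    # Direct transcription of the documented formula, VALID padding:
    #   output[b, i, j, k] =
    #       sum_{di, dj, q} inp[b, si*i + di, sj*j + dj, q] * filt[di, dj, q, k]
    B, H, W, C = len(inp), len(inp[0]), len(inp[0][0]), len(inp[0][0][0])
    FH, FW, K = len(filt), len(filt[0]), len(filt[0][0][0])
    _, si, sj, _ = strides  # strides[0] = strides[3] = 1, as documented
    OH = (H - FH) // si + 1
    OW = (W - FW) // sj + 1
    out = [[[[0.0] * K for _ in range(OW)] for _ in range(OH)] for _ in range(B)]
    for b in range(B):
        for i in range(OH):
            for j in range(OW):
                for k in range(K):
                    s = 0.0
                    for di in range(FH):
                        for dj in range(FW):
                            for q in range(C):
                                s += (inp[b][si * i + di][sj * j + dj][q]
                                      * filt[di][dj][q][k])
                    out[b][i][j][k] = s
    return out

# 1x3x3x1 input, 2x2x1x1 box filter, stride 1: each output sums a 2x2 patch.
inp = [[[[1.0], [2.0], [3.0]],
        [[4.0], [5.0], [6.0]],
        [[7.0], [8.0], [9.0]]]]
filt = [[[[1.0]], [[1.0]]],
        [[[1.0]], [[1.0]]]]
print(conv2d_nhwc(inp, filt, [1, 1, 1, 1]))  # [[[[12.0], [16.0]], [[24.0], [28.0]]]]
```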
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d_transpose.md deleted file mode 100644 index ee459ae0e6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.conv2d_transpose.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)` {#conv2d_transpose} - -The transpose of `conv2d`. - -This operation is sometimes called "deconvolution" after [Deconvolutional -Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is -actually the transpose (gradient) of `conv2d` rather than an actual -deconvolution. - -##### Args: - - -* `value`: A 4-D `Tensor` of type `float` and shape - `[batch, height, width, in_channels]`. -* `filter`: A 4-D `Tensor` with the same type as `value` and shape - `[height, width, output_channels, in_channels]`. `filter`'s - `in_channels` dimension must match that of `value`. -* `output_shape`: A 1-D `Tensor` representing the output shape of the - deconvolution op. -* `strides`: A list of ints. The stride of the sliding window for each - dimension of the input tensor. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `name`: Optional name for the returned tensor. - -##### Returns: - - A `Tensor` with the same type as `value`. - -##### Raises: - - -* `ValueError`: If input/output depth does not match `filter`'s shape, or if - padding is other than `'VALID'` or `'SAME'`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup.md deleted file mode 100644 index 588c2b393d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True)` {#embedding_lookup} - -Looks up `ids` in a list of embedding tensors. - -This function is used to perform parallel lookups on the list of -tensors in `params`. It is a generalization of -[`tf.gather()`](../../api_docs/python/array_ops.md#gather), where `params` is -interpreted as a partition of a larger embedding tensor. - -If `len(params) > 1`, each element `id` of `ids` is partitioned between -the elements of `params` according to the `partition_strategy`. -In all strategies, if the id space does not evenly divide the number of -partitions, each of the first `(max_id + 1) % len(params)` partitions will -be assigned one more id. - -If `partition_strategy` is `"mod"`, we assign each id to partition -`p = id % len(params)`. For instance, -13 ids are split across 5 partitions as: -`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]` - -If `partition_strategy` is `"div"`, we assign ids to partitions in a -contiguous manner. In this case, 13 ids are split across 5 partitions as: -`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]` - -The results of the lookup are concatenated into a dense -tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`. - -##### Args: - - -* `params`: A list of tensors with the same type and which can be concatenated - along dimension 0. Each `Tensor` must be appropriately sized for the given - `partition_strategy`. -* `ids`: A `Tensor` with type `int32` or `int64` containing the ids to be looked - up in `params`. 
-* `partition_strategy`: A string specifying the partitioning strategy, relevant - if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default - is `"mod"`. -* `name`: A name for the operation (optional). -* `validate_indices`: Whether or not to validate gather indices. - -##### Returns: - - A `Tensor` with the same type as the tensors in `params`. - -##### Raises: - - -* `ValueError`: If `params` is empty. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup_sparse.md deleted file mode 100644 index 03997f7813..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.embedding_lookup_sparse.md +++ /dev/null @@ -1,66 +0,0 @@ -### `tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner='mean')` {#embedding_lookup_sparse} - -Computes embeddings for the given ids and weights. - -This op assumes that there is at least one id for each row in the dense tensor -represented by sp_ids (i.e. there are no rows with empty features), and that -all the indices of sp_ids are in canonical row-major order. - -It also assumes that all id values lie in the range [0, p0), where p0 -is the sum of the size of params along dimension 0. - -##### Args: - - -* `params`: A single tensor representing the complete embedding tensor, - or a list of P tensors all of same shape except for the first dimension, - representing sharded embedding tensors. -* `sp_ids`: N x M SparseTensor of int64 ids (typically from FeatureValueToId), - where N is typically batch size and M is arbitrary. -* `sp_weights`: either a SparseTensor of float / double weights, or None to - indicate all weights should be taken to be 1. If specified, sp_weights - must have exactly the same shape and indices as sp_ids. 
-* `partition_strategy`: A string specifying the partitioning strategy, relevant - if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default - is `"mod"`. See `tf.nn.embedding_lookup` for more details. -* `name`: Optional name for the op. -* `combiner`: A string specifying the reduction op. Currently "mean", "sqrtn" - and "sum" are supported. - "sum" computes the weighted sum of the embedding results for each row. - "mean" is the weighted sum divided by the total weight. - "sqrtn" is the weighted sum divided by the square root of the sum of the - squares of the weights. - -##### Returns: - - A dense tensor representing the combined embeddings for the - sparse ids. For each row in the dense tensor represented by sp_ids, the op - looks up the embeddings for all ids in that row, multiplies them by the - corresponding weight, and combines these embeddings as specified. - - In other words, if - shape(combined params) = [p0, p1, ..., pm] - and - shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn] - then - shape(output) = [d0, d1, ..., dn-1, p1, ..., pm]. - - For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are - - [0, 0]: id 1, weight 2.0 - [0, 1]: id 3, weight 0.5 - [1, 0]: id 0, weight 1.0 - [2, 3]: id 1, weight 3.0 - - with combiner="mean", then the output will be a 3x20 matrix where - output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5) - output[1, :] = params[0, :] * 1.0 - output[2, :] = params[1, :] * 3.0 - -##### Raises: - - -* `TypeError`: If sp_ids is not a SparseTensor, or if sp_weights is neither - None nor SparseTensor. -* `ValueError`: If combiner is not one of {"mean", "sqrtn", "sum"}. 
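The `combiner="mean"` arithmetic in the example above can be checked directly with NumPy. This is a sketch of the math only, not the TF op: the 10x20 `params` matrix is hypothetical (randomly filled), and the `rows` structure just re-encodes the example's sp_ids/sp_weights.

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.standard_normal((10, 20))  # a hypothetical 10x20 embedding matrix

# The sparse ids/weights from the example above:
# row 0 holds (id 1, w 2.0) and (id 3, w 0.5); row 1 holds (id 0, w 1.0);
# row 2 holds (id 1, w 3.0).
rows = [[(1, 2.0), (3, 0.5)], [(0, 1.0)], [(1, 3.0)]]

# combiner="mean": weighted sum divided by the total weight, per row.
output = np.stack([
    sum(w * params[i] for i, w in row) / sum(w for _, w in row)
    for row in rows
])

assert output.shape == (3, 20)
assert np.allclose(output[0], (params[1] * 2.0 + params[3] * 0.5) / (2.0 + 0.5))
assert np.allclose(output[1], params[0])
assert np.allclose(output[2], params[1])
```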
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.l2_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.l2_loss.md new file mode 100644 index 0000000000..fd648ca642 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.l2_loss.md @@ -0,0 +1,19 @@ +### `tf.nn.l2_loss(t, name=None)` {#l2_loss} + +L2 Loss. + +Computes half the L2 norm of a tensor without the `sqrt`: + + output = sum(t ** 2) / 2 + +##### Args: + + +* `t`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Typically 2-D, but may have any dimensions. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `t`. 0-D. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_softmax.md deleted file mode 100644 index 18e1f96590..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_softmax.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.nn.log_softmax(logits, name=None)` {#log_softmax} - -Computes log softmax activations. - -For each batch `i` and class `j` we have - - logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i]))) - -##### Args: - - -* `logits`: A `Tensor`. Must be one of the following types: `float32`, `float64`. - 2-D with shape `[batch_size, num_classes]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `logits`. Same shape as `logits`. 
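The log-softmax identity above is easy to verify in NumPy; this is a sketch of the formula, not the TF kernel:

```python
import numpy as np

logits = np.array([[1.0, 2.0, 3.0]])

# logsoftmax[i, j] = logits[i, j] - log(sum_k exp(logits[i, k]))
log_softmax = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))

# Exponentiating recovers ordinary softmax probabilities, which sum to 1.
assert log_softmax.shape == logits.shape
assert np.allclose(np.exp(log_softmax).sum(axis=-1), 1.0)
```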
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_uniform_candidate_sampler.md new file mode 100644 index 0000000000..baf9f9d421 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.log_uniform_candidate_sampler.md @@ -0,0 +1,56 @@ +### `tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#log_uniform_candidate_sampler} + +Samples a set of classes using a log-uniform (Zipfian) base distribution. + +This operation randomly samples a tensor of sampled classes +(`sampled_candidates`) from the range of integers `[0, range_max)`. + +The elements of `sampled_candidates` are drawn without replacement +(if `unique=True`) or with replacement (if `unique=False`) from +the base distribution. + +The base distribution for this operation is an approximately log-uniform +or Zipfian distribution: + +`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)` + +This sampler is useful when the target classes approximately follow such +a distribution - for example, if the classes represent words in a lexicon +sorted in decreasing order of frequency. If your classes are not ordered by +decreasing frequency, do not use this op. + +In addition, this operation returns tensors `true_expected_count` +and `sampled_expected_count` representing the number of times each +of the target classes (`true_classes`) and the sampled +classes (`sampled_candidates`) is expected to occur in an average +tensor of sampled classes. These values correspond to `Q(y|x)` +defined in [this +document](http://www.tensorflow.org/extras/candidate_sampling.pdf). +If `unique=True`, then these are post-rejection probabilities and we +compute them approximately. + +##### Args: + + +* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. 
The target classes. +* `num_true`: An `int`. The number of target classes per training example. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. +* `unique`: A `bool`. Determines whether all sampled classes in a batch are + unique. +* `range_max`: An `int`. The number of possible classes. +* `seed`: An `int`. An operation-specific seed. Default is 0. +* `name`: A name for the operation (optional). + +##### Returns: + + +* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. + The sampled classes. +* `true_expected_count`: A tensor of type `float`. Same shape as + `true_classes`. The expected counts under the sampling distribution + of each of `true_classes`. +* `sampled_expected_count`: A tensor of type `float`. Same shape as + `sampled_candidates`. The expected counts under the sampling distribution + of each of `sampled_candidates`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.max_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.max_pool3d.md deleted file mode 100644 index 471fcb532f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.max_pool3d.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.nn.max_pool3d(input, ksize, strides, padding, name=None)` {#max_pool3d} - -Performs 3D max pooling on the input. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Shape `[batch, depth, rows, cols, channels]` tensor to pool over. -* `ksize`: A list of `ints` that has length `>= 5`. - 1-D tensor of length 5. The size of the window for each dimension of - the input tensor. Must have `ksize[0] = ksize[1] = 1`. -* `strides`: A list of `ints` that has length `>= 5`. - 1-D tensor of length 5. The stride of the sliding window for each - dimension of `input`. 
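As a sanity check on the log-uniform base distribution above: the log terms telescope, so `P(class)` sums to exactly 1 over `[0, range_max)`, and the probabilities decrease with class index. A NumPy sketch (with an arbitrary `range_max` chosen for illustration):

```python
import numpy as np

range_max = 1000
classes = np.arange(range_max)
p = (np.log(classes + 2) - np.log(classes + 1)) / np.log(range_max + 1)

# The log terms telescope: sum = log(range_max + 1) / log(range_max + 1) = 1.
assert np.isclose(p.sum(), 1.0)
# Zipfian decay: class 0 is the most likely, and p is strictly decreasing.
assert np.all(np.diff(p) < 0)
```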
Must have `strides[0] = strides[4] = 1`. -* `padding`: A `string` from: `"SAME", "VALID"`. - The type of padding algorithm to use. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. The max pooled output tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.zero_fraction.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.zero_fraction.md deleted file mode 100644 index f4d126a041..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.zero_fraction.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.nn.zero_fraction(value, name=None)` {#zero_fraction} - -Returns the fraction of zeros in `value`. - -If `value` is empty, the result is `nan`. - -This is useful in summaries to measure and report sparsity. For example, - - z = tf.Relu(...) - summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z)) - -##### Args: - - -* `value`: A tensor of numeric type. -* `name`: A name for the operation (optional). - -##### Returns: - - The fraction of zeros in `value`, with type `float32`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ones_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ones_like.md new file mode 100644 index 0000000000..2c9b04ceca --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.ones_like.md @@ -0,0 +1,28 @@ +### `tf.ones_like(tensor, dtype=None, name=None)` {#ones_like} + +Creates a tensor with all elements set to 1. + +Given a single tensor (`tensor`), this operation returns a tensor of the same +type and shape as `tensor` with all elements set to 1. Optionally, you can +specify a new type (`dtype`) for the returned tensor. + +For example: + +```python +# 'tensor' is [[1, 2, 3], [4, 5, 6]] +tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]] +``` + +##### Args: + + +* `tensor`: A `Tensor`. 
+* `dtype`: A type for the returned `Tensor`. Must be `float32`, `float64`, + `int8`, `int16`, `int32`, `int64`, `uint8`, or `complex64`. + +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with all elements set to 1. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.placeholder_with_default.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.placeholder_with_default.md new file mode 100644 index 0000000000..2719b876f1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.placeholder_with_default.md @@ -0,0 +1,17 @@ +### `tf.placeholder_with_default(input, shape, name=None)` {#placeholder_with_default} + +A placeholder op that passes through `input` when its output is not fed. + +##### Args: + + +* `input`: A `Tensor`. The default value to produce when `output` is not fed. +* `shape`: A `tf.TensorShape` or list of `ints`. + The (possibly partial) shape of the tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + A placeholder tensor that defaults to `input` if it is not fed. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.TFRecordWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.TFRecordWriter.md deleted file mode 100644 index 4a67724209..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.TFRecordWriter.md +++ /dev/null @@ -1,41 +0,0 @@ -A class to write records to a TFRecords file. - -This class implements `__enter__` and `__exit__`, and can be used -in `with` blocks like a normal file. - -- - - - -#### `tf.python_io.TFRecordWriter.__init__(path)` {#TFRecordWriter.__init__} - -Opens file `path` and creates a `TFRecordWriter` writing to it. - -##### Args: - - -* `path`: The path to the TFRecords file.
- -##### Raises: - - -* `IOError`: If `path` cannot be opened for writing. - - -- - - - -#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write} - -Write a string record to the file. - -##### Args: - - -* `record`: str - - -- - - - -#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close} - -Close the file. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md deleted file mode 100644 index d389872919..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.random_crop(value, size, seed=None, name=None)` {#random_crop} - -Randomly crops a tensor to a given size. - -Slices a shape `size` portion out of `value` at a uniformly chosen offset. -Requires `value.shape >= size`. - -If a dimension should not be cropped, pass the full size of that dimension. -For example, RGB images can be cropped with -`size = [crop_height, crop_width, 3]`. - -##### Args: - - -* `value`: Input tensor to crop. -* `size`: 1-D tensor with size the rank of `value`. -* `seed`: Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for this operation (optional). - -##### Returns: - - A cropped tensor of the same rank as `value` and shape `size`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_uniform.md new file mode 100644 index 0000000000..517bdd98c4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_uniform.md @@ -0,0 +1,41 @@ +### `tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)` {#random_uniform} + +Outputs random values from a uniform distribution. 
+ +The generated values follow a uniform distribution in the range +`[minval, maxval)`. The lower bound `minval` is included in the range, while +the upper bound `maxval` is excluded. + +For floats, the default range is `[0, 1)`. For ints, at least `maxval` must +be specified explicitly. + +In the integer case, the random integers are slightly biased unless +`maxval - minval` is an exact power of two. The bias is small for values of +`maxval - minval` significantly smaller than the range of the output (either +`2**32` or `2**64`). + +##### Args: + + +* `shape`: A 1-D integer Tensor or Python array. The shape of the output tensor. +* `minval`: A 0-D Tensor or Python value of type `dtype`. The lower bound on the + range of random values to generate. Defaults to 0. +* `maxval`: A 0-D Tensor or Python value of type `dtype`. The upper bound on + the range of random values to generate. Defaults to 1 if `dtype` is + floating point. +* `dtype`: The type of the output: `float32`, `float64`, `int32`, or `int64`. +* `seed`: A Python integer. Used to create a random seed for the distribution. + See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for the operation (optional). + +##### Returns: + + A tensor of the specified shape filled with random uniform values. + +##### Raises: + + +* `ValueError`: If `dtype` is integral and `maxval` is not specified. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.range.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.range.md new file mode 100644 index 0000000000..c33825d3be --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.range.md @@ -0,0 +1,37 @@ +### `tf.range(start, limit=None, delta=1, name='range')` {#range} + +Creates a sequence of integers. + +Creates a sequence of integers that begins at `start` and extends by +increments of `delta` up to but not including `limit`. 
+ +Like the Python builtin `range`, `start` defaults to 0, so that +`range(n) = range(0, n)`. + +For example: + +``` +# 'start' is 3 +# 'limit' is 18 +# 'delta' is 3 +tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15] + +# 'limit' is 5 +tf.range(limit) ==> [0, 1, 2, 3, 4] +``` + +##### Args: + + +* `start`: A 0-D (scalar) of type `int32`. First entry in sequence. + Defaults to 0. +* `limit`: A 0-D (scalar) of type `int32`. Upper limit of sequence, + exclusive. +* `delta`: A 0-D `Tensor` (scalar) of type `int32`. Optional. Default is 1. + Number that increments `start`. +* `name`: A name for the operation (optional). + +##### Returns: + + A 1-D `int32` `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md new file mode 100644 index 0000000000..8d8fdb4af4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md @@ -0,0 +1,28 @@ +### `tf.rank(input, name=None)` {#rank} + +Returns the rank of a tensor. + +This operation returns an integer representing the rank of `input`. + +For example: + +```prettyprint +# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] +# shape of tensor 't' is [2, 2, 3] +rank(t) ==> 3 +``` + +**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank +of a tensor is the number of indices required to uniquely select each element +of the tensor. Rank is also known as "order", "degree", or "ndims." + +##### Args: + + +* `input`: A `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int32`.
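The note on rank above corresponds exactly to NumPy's `ndim`; a sketch using the same example tensor (`tf.rank` returns the same value, but as an `int32` tensor):

```python
import numpy as np

t = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])

assert t.shape == (2, 2, 3)
# Three indices are needed to select one element, e.g. t[0, 1, 2], so rank is 3.
assert t.ndim == 3
```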
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_join.md new file mode 100644 index 0000000000..c65c6022ba --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_join.md @@ -0,0 +1,49 @@ +### `tf.reduce_join(inputs, reduction_indices, keep_dims=None, separator=None, name=None)` {#reduce_join} + +Joins a string Tensor across the given dimensions. + +Computes the string join across dimensions in the given string Tensor of shape +`[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input +strings with the given separator (default: empty string). Negative indices are +counted backwards from the end, with `-1` being equivalent to `n - 1`. Passing +an empty `reduction_indices` joins all strings in linear index order and outputs +a scalar string. + + +For example: +``` +# tensor `a` is [["a", "b"], ["c", "d"]] +tf.reduce_join(a, 0) ==> ["ac", "bd"] +tf.reduce_join(a, 1) ==> ["ab", "cd"] +tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"] +tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"] +tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]] +tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]] +tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"] +tf.reduce_join(a, [0, 1]) ==> ["acbd"] +tf.reduce_join(a, [1, 0]) ==> ["abcd"] +tf.reduce_join(a, []) ==> ["abcd"] +``` + +##### Args: + + +* `inputs`: A `Tensor` of type `string`. + The input to be joined. All reduced indices must have non-zero size. +* `reduction_indices`: A `Tensor` of type `int32`. + The dimensions to reduce over. Dimensions are reduced in the + order specified. If `reduction_indices` has higher rank than `1`, it is + flattened. Omitting `reduction_indices` is equivalent to passing + `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported. +* `keep_dims`: An optional `bool`. 
Defaults to `False`. + If `True`, retain reduced dimensions with length `1`. +* `separator`: An optional `string`. Defaults to `""`. + The separator to use when joining. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `string`. + Has shape equal to that of the input with reduced dimensions removed or + set to `1` depending on `keep_dims`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_max.md deleted file mode 100644 index f137e8091c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_max.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_max} - -Computes the maximum of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. -Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -##### Args: - - -* `input_tensor`: The tensor to reduce. Should have numeric type. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. -* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. 
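The reduction semantics of `reduce_max` above mirror NumPy's `max` with `axis` and `keepdims`; a sketch of the semantics, not the TF op:

```python
import numpy as np

x = np.array([[1, 2, 3], [6, 5, 4]])

assert np.max(x) == 6                                # all dimensions reduced
assert np.array_equal(x.max(axis=0), [6, 5, 4])      # reduce over rows
assert np.array_equal(x.max(axis=1), [3, 6])         # reduce over columns
assert x.max(axis=1, keepdims=True).shape == (2, 1)  # keep_dims=True retains rank
```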
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_sum.md new file mode 100644 index 0000000000..edbb1ab055 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reduce_sum.md @@ -0,0 +1,37 @@ +### `tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_sum} + +Computes the sum of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +For example: + +```python +# 'x' is [[1, 1, 1] +# [1, 1, 1]] +tf.reduce_sum(x) ==> 6 +tf.reduce_sum(x, 0) ==> [2, 2, 2] +tf.reduce_sum(x, 1) ==> [3, 3] +tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]] +tf.reduce_sum(x, [0, 1]) ==> 6 +``` + +##### Args: + + +* `input_tensor`: The tensor to reduce. Should have numeric type. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reverse.md deleted file mode 100644 index e316d5faae..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reverse.md +++ /dev/null @@ -1,61 +0,0 @@ -### `tf.reverse(tensor, dims, name=None)` {#reverse} - -Reverses specific dimensions of a tensor. 
- -Given a `tensor`, and a `bool` tensor `dims` representing the dimensions -of `tensor`, this operation reverses each dimension i of `tensor` where -`dims[i]` is `True`. - -`tensor` can have up to 8 dimensions. The number of dimensions -of `tensor` must equal the number of elements in `dims`. In other words: - -`rank(tensor) = size(dims)` - -For example: - -```prettyprint -# tensor 't' is [[[[ 0, 1, 2, 3], -# [ 4, 5, 6, 7], -# [ 8, 9, 10, 11]], -# [[12, 13, 14, 15], -# [16, 17, 18, 19], -# [20, 21, 22, 23]]]] -# tensor 't' shape is [1, 2, 3, 4] - -# 'dims' is [False, False, False, True] -reverse(t, dims) ==> [[[[ 3, 2, 1, 0], - [ 7, 6, 5, 4], - [ 11, 10, 9, 8]], - [[15, 14, 13, 12], - [19, 18, 17, 16], - [23, 22, 21, 20]]]] - -# 'dims' is [False, True, False, False] -reverse(t, dims) ==> [[[[12, 13, 14, 15], - [16, 17, 18, 19], - [20, 21, 22, 23] - [[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]]] - -# 'dims' is [False, False, True, False] -reverse(t, dims) ==> [[[[8, 9, 10, 11], - [4, 5, 6, 7], - [0, 1, 2, 3]] - [[20, 21, 22, 23], - [16, 17, 18, 19], - [12, 13, 14, 15]]]] -``` - -##### Args: - - -* `tensor`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `bool`, `float32`, `float64`. - Up to 8-D. -* `dims`: A `Tensor` of type `bool`. 1-D. The dimensions to reverse. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.saturate_cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.saturate_cast.md new file mode 100644 index 0000000000..6a77c2791e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.saturate_cast.md @@ -0,0 +1,19 @@ +### `tf.saturate_cast(value, dtype, name=None)` {#saturate_cast} + +Performs a safe saturating cast of `value` to `dtype`. 
+ +This function casts the input to `dtype` without applying any scaling. If +there is a danger that values would over or underflow in the cast, this op +applies the appropriate clamping before the cast. + +##### Args: + + +* `value`: A `Tensor`. +* `dtype`: The desired output `DType`. +* `name`: A name for the operation (optional). + +##### Returns: + + `value` safely cast to `dtype`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.scatter_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.scatter_add.md deleted file mode 100644 index a8f8b7a9b0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.scatter_add.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.scatter_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_add} - -Adds sparse updates to a variable reference. - -This operation computes - - # Scalar indices - ref[indices, ...] += updates[...] - - # Vector indices (for each i) - ref[indices[i], ...] += updates[i, ...] - - # High rank indices (for each i, ..., j) - ref[indices[i, ..., j], ...] += updates[i, ..., j, ...] - -This operation outputs `ref` after the update is done. -This makes it easier to chain operations that need to use the reset value. - -Duplicate entries are handled correctly: if multiple `indices` reference -the same location, their contributions add. - -Requires `updates.shape = indices.shape + ref.shape[1:]`. - -
- -
- -##### Args: - - -* `ref`: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Should be from a `Variable` node. -* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A tensor of indices into the first dimension of `ref`. -* `updates`: A `Tensor`. Must have the same type as `ref`. - A tensor of updated values to add to `ref`. -* `use_locking`: An optional `bool`. Defaults to `False`. - If True, the addition will be protected by a lock; - otherwise the behavior is undefined, but may exhibit less contention. -* `name`: A name for the operation (optional). - -##### Returns: - - Same as `ref`. Returned as a convenience for operations that want - to use the updated values after the update is done. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.size.md deleted file mode 100644 index 67f1bc4885..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.size.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.size(input, name=None)` {#size} - -Returns the size of a tensor. - -This operation returns an integer representing the number of elements in -`input`. - -For example: - -```prettyprint -# 't' is [[[1, 1,, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]] -size(t) ==> 12 -``` - -##### Args: - - -* `input`: A `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int32`. 
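The duplicate-handling rule of `tf.scatter_add` above (contributions from repeated indices add) can be sketched with NumPy's unbuffered `np.add.at`; this illustrates the semantics only, not the TF op:

```python
import numpy as np

ref = np.zeros(4)
indices = np.array([0, 2, 0])   # index 0 appears twice
updates = np.array([1.0, 5.0, 2.0])

# ref[indices[i]] += updates[i], accumulating over duplicate indices.
np.add.at(ref, indices, updates)

assert np.array_equal(ref, [3.0, 0.0, 5.0, 0.0])
```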
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_add.md new file mode 100644 index 0000000000..4835ae70e5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_add.md @@ -0,0 +1,55 @@ +### `tf.sparse_add(a, b, thresh=0)` {#sparse_add} + +Adds two tensors, at least one of which is a `SparseTensor`. + +If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If +both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order +of arguments does not matter. Use vanilla `tf.add()` for adding two dense +`Tensor`s. + +The indices of any input `SparseTensor` are assumed ordered in standard +lexicographic order. If this is not the case, before this step run +`SparseReorder` to restore index ordering. + +If both arguments are sparse, we perform "clipping" as follows. By default, +if two values sum to zero at some index, the output `SparseTensor` would still +include that particular location in its index, storing a zero in the +corresponding value slot. To override this, callers can specify `thresh`, +indicating that if the sum has a magnitude strictly smaller than `thresh`, its +corresponding value and index would then not be included. In particular, +`thresh == 0.0` (default) means everything is kept and actual thresholding +happens only for a positive `thresh`. + +For example, suppose the logical sum of two sparse operands is (densified): + + [ 2] + [.1 0] + [ 6 -.2] + +Then, + + - thresh == 0 (the default): all 5 index/value pairs will be returned. + - thresh == 0.11: only .1 and 0 will vanish, and the remaining three + index/value pairs will be returned. + - thresh == 0.21: .1, 0, and -.2 will vanish. + +##### Args: + + +* `a`: The first operand; `SparseTensor` or `Tensor`. +* `b`: The second operand; `SparseTensor` or `Tensor`. At least one operand + must be sparse.
+* `thresh`: A 0-D `Tensor`. The magnitude threshold that determines if an + output value/index pair takes space. Its dtype should match that of the + values if they are real; if the latter are complex64/complex128, then the + dtype should be float32/float64, correspondingly. + +##### Returns: + + A `SparseTensor` or a `Tensor`, representing the sum. + +##### Raises: + + +* `TypeError`: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md new file mode 100644 index 0000000000..d0606cdc5d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md @@ -0,0 +1,60 @@ +### `tf.sparse_reset_shape(sp_input, new_shape=None)` {#sparse_reset_shape} + +Resets the shape of a `SparseTensor` with indices and values unchanged. + +If `new_shape` is None, returns a copy of `sp_input` with its shape reset +to the tight bounding box of `sp_input`. + +If `new_shape` is provided, then it must be larger or equal in all dimensions +compared to the shape of `sp_input`. When this condition is met, the returned +SparseTensor will have its shape reset to `new_shape` and its indices and +values unchanged from that of `sp_input`. + +For example: + + Consider a `sp_input` with shape [2, 3, 5]: + + [0, 0, 1]: a + [0, 1, 0]: b + [0, 2, 2]: c + [1, 0, 3]: d + + - It is an error to set `new_shape` as [3, 7] since this represents a + rank-2 tensor while `sp_input` is rank-3. This is either a ValueError + during graph construction (if both shapes are known) or an OpError during + run time. + + - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or + equal in every dimension compared to the original shape [2, 3, 5].
+ + - On the other hand, setting new_shape as [2, 3, 4] is also an error: The + third dimension is smaller than the original shape [2, 3, 5] (and an + `InvalidArgumentError` will be raised). + + - If `new_shape` is None, the returned SparseTensor will have a shape + [2, 3, 4], which is the tight bounding box of `sp_input`. + +##### Args: + + +* `sp_input`: The input `SparseTensor`. +* `new_shape`: None or a vector representing the new shape for the returned + `SparseTensor`. + +##### Returns: + + A `SparseTensor` with indices and values unchanged from `sp_input`. Its shape + is `new_shape` if that is set. Otherwise it is the tight bounding box of + `sp_input`. + +##### Raises: + + +* `TypeError`: If `sp_input` is not a `SparseTensor`. +* `ValueError`: If `new_shape` represents a tensor with a different rank from + that of `sp_input` (if shapes are known when graph is constructed). +* `OpError`: + - If `new_shape` has dimension sizes that are too small. + - If shapes are not known during graph construction time, and during run + time it is found out that the ranks do not match. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_retain.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_retain.md deleted file mode 100644 index dcaa303627..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_retain.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.sparse_retain(sp_input, to_retain)` {#sparse_retain} - -Retains specified non-empty values within a `SparseTensor`. - -For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values: - - [0, 1]: a - [0, 3]: b - [2, 0]: c - [3, 1]: d - -and `to_retain = [True, False, False, True]`, then the output will -be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values: - - [0, 1]: a - [3, 1]: d - -##### Args: - - -* `sp_input`: The input `SparseTensor` with `N` non-empty elements.
-* `to_retain`: A bool vector of length `N` with `M` true values. - -##### Returns: - - A `SparseTensor` with the same shape as the input and `M` non-empty - elements corresponding to the true positions in `to_retain`. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sqrt_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sqrt_n.md deleted file mode 100644 index bc665a42a8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sqrt_n.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.sparse_segment_sqrt_n(data, indices, segment_ids, name=None)` {#sparse_segment_sqrt_n} - -Computes the sum along sparse segments of a tensor divided by the sqrt of N. - -N is the size of the segment being reduced. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `indices`: A `Tensor` of type `int32`. - A 1-D tensor. Has same rank as `segment_ids`. -* `segment_ids`: A `Tensor` of type `int32`. - A 1-D tensor. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_softmax.md deleted file mode 100644 index cb54fd9452..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_softmax.md +++ /dev/null @@ -1,51 +0,0 @@ -### `tf.sparse_softmax(sp_input, name=None)` {#sparse_softmax} - -Applies softmax to a batched N-D `SparseTensor`. 
- -The inputs represent an N-D SparseTensor with logical shape `[..., B, C]` -(where `N >= 2`), and with indices sorted in the canonical lexicographic -order. - -This op is equivalent to applying the normal `tf.nn.softmax()` to each -innermost logical submatrix with shape `[B, C]`, but with the catch that *the -implicitly zero elements do not participate*. Specifically, the algorithm is -equivalent to: - - (1) Applies `tf.nn.softmax()` to a densified view of each innermost - submatrix with shape `[B, C]`, along the size-C dimension; - (2) Masks out the original implicitly-zero locations; - (3) Renormalizes the remaining elements. - -Hence, the `SparseTensor` result has exactly the same non-zero indices and -shape. - -Example: -```python -# First batch: -# [? e.] -# [1. ? ] -# Second batch: -# [e ? ] -# [e e ] -shape = [2, 2, 2] # 3-D SparseTensor -values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]]) -indices = np.vstack(np.where(values)).astype(np.int64).T - -result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape)) -# ...returning a 3-D SparseTensor, equivalent to: -# [? 1.] [1 ?] -# [1. ? ] and [.5 .5] -# where ? means implicitly zero. -``` - -##### Args: - - -* `sp_input`: N-D `SparseTensor`, where `N >= 2`. -* `name`: optional name of the operation. - -##### Returns: - - -* `output`: N-D `SparseTensor` representing the results. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_dense_matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_dense_matmul.md deleted file mode 100644 index 5bb99ef029..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_dense_matmul.md +++ /dev/null @@ -1,163 +0,0 @@ -### `tf.sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)` {#sparse_tensor_dense_matmul} - -Multiply SparseTensor (of rank 2) "A" by dense matrix "B". 
- -No validity checking is performed on the indices of A. However, the following -input format is recommended for optimal behavior: - -if adjoint_a == false: - A should be sorted in lexicographically increasing order. Use - sparse_reorder if you're not sure. -if adjoint_a == true: - A should be sorted in order of increasing dimension 1 (i.e., "column major" - order instead of "row major" order). - -Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True): - -There are a number of questions to ask in the decision process, including: - -* Will the SparseTensor A fit in memory if densified? -* Is the column count of the product large (>> 1)? -* Is the density of A larger than approximately 15%? - -If the answer to several of these questions is yes, consider -converting the SparseTensor to a dense one and using tf.matmul with sp_a=True. - -This operation tends to perform well when A is more sparse, if the column size -of the product is small (e.g. matrix-vector multiplication), if sp_a.shape -takes on large values. - -Below is a rough speed comparison between sparse_tensor_dense_matmul, -labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of -the comparison, the time spent converting from a SparseTensor to a dense -Tensor is not included, so it is overly conservative with respect to -the time ratio. 
- -Benchmark system: -CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB -GPU: NVidia Tesla k40c - -Compiled with: --c opt --config=cuda --copt=-mavx - -```tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks -A sparse [m, k] with % nonzero values between 1% and 80% -B dense [k, n] - -% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense) -0.01 1 True 100 100 0.000221166 0.00010154 0.459112 -0.01 1 True 100 1000 0.00033858 0.000109275 0.322745 -0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385 -0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669 -0.01 1 False 100 100 0.000208085 0.000107603 0.51711 -0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762 -0.01 1 False 1000 100 0.000308222 0.00010345 0.335635 -0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124 -0.01 10 True 100 100 0.000218522 0.000105537 0.482958 -0.01 10 True 100 1000 0.000340882 0.000111641 0.327506 -0.01 10 True 1000 100 0.000315472 0.000117376 0.372064 -0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128 -0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354 -0.01 10 False 100 1000 0.000330552 0.000112615 0.340687 -0.01 10 False 1000 100 0.000341277 0.000114097 0.334324 -0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549 -0.01 25 True 100 100 0.000207806 0.000105977 0.509981 -0.01 25 True 100 1000 0.000322879 0.00012921 0.400181 -0.01 25 True 1000 100 0.00038262 0.000141583 0.370035 -0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504 -0.01 25 False 100 100 0.000209401 0.000104696 0.499979 -0.01 25 False 100 1000 0.000321161 0.000130737 0.407076 -0.01 25 False 1000 100 0.000377012 0.000136801 0.362856 -0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413 -0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833 -0.2 1 True 100 1000 0.000348674 0.000147475 0.422959 -0.2 1 True 1000 100 0.000336908 0.00010122 0.300439 -0.2 1 True 1000 1000 0.001022 0.000203274 0.198898 -0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746 -0.2 1 False 
100 1000 0.000356127 0.000146824 0.41228 -0.2 1 False 1000 100 0.000322664 0.000100918 0.312764 -0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648 -0.2 10 True 100 100 0.000211692 0.000109903 0.519165 -0.2 10 True 100 1000 0.000372819 0.000164321 0.440753 -0.2 10 True 1000 100 0.000338651 0.000144806 0.427596 -0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064 -0.2 10 False 100 100 0.000215727 0.000110502 0.512231 -0.2 10 False 100 1000 0.000375419 0.0001613 0.429653 -0.2 10 False 1000 100 0.000336999 0.000145628 0.432132 -0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618 -0.2 25 True 100 100 0.000218705 0.000129913 0.594009 -0.2 25 True 100 1000 0.000394794 0.00029428 0.745402 -0.2 25 True 1000 100 0.000404483 0.0002693 0.665788 -0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052 -0.2 25 False 100 100 0.000221494 0.0001306 0.589632 -0.2 25 False 100 1000 0.000396436 0.000297204 0.74969 -0.2 25 False 1000 100 0.000409346 0.000270068 0.659754 -0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046 -0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836 -0.5 1 True 100 1000 0.000415328 0.000223073 0.537101 -0.5 1 True 1000 100 0.000358324 0.00011269 0.314492 -0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851 -0.5 1 False 100 100 0.000224196 0.000101423 0.452386 -0.5 1 False 100 1000 0.000400987 0.000223286 0.556841 -0.5 1 False 1000 100 0.000368825 0.00011224 0.304318 -0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563 -0.5 10 True 100 100 0.000222125 0.000112308 0.505608 -0.5 10 True 100 1000 0.000461088 0.00032357 0.701753 -0.5 10 True 1000 100 0.000394624 0.000225497 0.571422 -0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801 -0.5 10 False 100 100 0.000232083 0.000114978 0.495418 -0.5 10 False 100 1000 0.000454574 0.000324632 0.714146 -0.5 10 False 1000 100 0.000379097 0.000227768 0.600817 -0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638 -0.5 25 True 100 100 0.00023429 0.000151703 0.647501 -0.5 25 True 100 1000 0.000497462 0.000598873 1.20386 
-0.5 25 True 1000 100 0.000460778 0.000557038 1.20891 -0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845 -0.5 25 False 100 100 0.000228981 0.000155334 0.678371 -0.5 25 False 100 1000 0.000496139 0.000620789 1.25124 -0.5 25 False 1000 100 0.00045473 0.000551528 1.21287 -0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927 -0.8 1 True 100 100 0.000222037 0.000105301 0.47425 -0.8 1 True 100 1000 0.000410804 0.000329327 0.801664 -0.8 1 True 1000 100 0.000349735 0.000131225 0.375212 -0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633 -0.8 1 False 100 100 0.000214079 0.000107486 0.502085 -0.8 1 False 100 1000 0.000413746 0.000323244 0.781261 -0.8 1 False 1000 100 0.000348983 0.000131983 0.378193 -0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282 -0.8 10 True 100 100 0.000229159 0.00011825 0.516017 -0.8 10 True 100 1000 0.000498845 0.000532618 1.0677 -0.8 10 True 1000 100 0.000383126 0.00029935 0.781336 -0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689 -0.8 10 False 100 100 0.000230783 0.000124958 0.541452 -0.8 10 False 100 1000 0.000493393 0.000550654 1.11606 -0.8 10 False 1000 100 0.000377167 0.000298581 0.791642 -0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024 -0.8 25 True 100 100 0.000233496 0.000175241 0.75051 -0.8 25 True 100 1000 0.00055654 0.00102658 1.84458 -0.8 25 True 1000 100 0.000463814 0.000783267 1.68875 -0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132 -0.8 25 False 100 100 0.000240243 0.000175047 0.728625 -0.8 25 False 100 1000 0.000578102 0.00104499 1.80763 -0.8 25 False 1000 100 0.000485113 0.000776849 1.60138 -0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992 -``` - -##### Args: - - -* `sp_a`: SparseTensor A, of rank 2. -* `b`: A dense Matrix with the same dtype as sp_a. -* `adjoint_a`: Use the adjoint of A in the matrix multiply. If A is complex, - this is transpose(conj(A)). Otherwise it's transpose(A). -* `adjoint_b`: Use the adjoint of B in the matrix multiply. If B is complex, - this is transpose(conj(B)). 
Otherwise it's transpose(B).
-* `name`: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
-  A dense matrix (pseudo-code in dense np.matrix notation):
-    A = A.H if adjoint_a else A
-    B = B.H if adjoint_b else B
-    return A*B
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_to_dense.md
new file mode 100644
index 0000000000..a0c0a6ca9c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_tensor_to_dense.md
@@ -0,0 +1,43 @@
+### `tf.sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)` {#sparse_tensor_to_dense}
+
+Converts a `SparseTensor` into a dense tensor.
+
+This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s.
+
+For example, if `sp_input` has shape `[3, 5]` and non-empty string values:
+
+    [0, 1]: a
+    [0, 3]: b
+    [2, 0]: c
+
+and `default_value` is `x`, then the output will be a dense `[3, 5]`
+string tensor with values:
+
+    [[x a x b x]
+     [x x x x x]
+     [c x x x x]]
+
+Indices must not contain repeats. This is checked only if
+`validate_indices` is `True`.
+
+##### Args:
+
+
+* `sp_input`: The input `SparseTensor`.
+* `default_value`: Scalar value to set for indices not specified in
+    `sp_input`. Defaults to zero.
+* `validate_indices`: A boolean value. If `True`, indices are checked to make
+    sure they are sorted in lexicographic order and that there are no repeats.
+* `name`: A name prefix for the returned tensors (optional).
+
+##### Returns:
+
+  A dense tensor with shape `sp_input.shape` and values specified by
+  the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
+  `default_value`.
+
+##### Raises:
+
+
+* `TypeError`: If `sp_input` is not a `SparseTensor`.
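For readers without a TensorFlow session at hand, the conversion rule above can be sketched in plain NumPy. `sparse_to_dense_sketch` is a hypothetical helper written for illustration only (it is not part of the TensorFlow API), modeling the `default_value` fill and the repeated-index check:

```python
import numpy as np

def sparse_to_dense_sketch(indices, values, shape, default_value=0,
                           validate_indices=True):
    """Illustrative NumPy model of the tf.sparse_tensor_to_dense semantics."""
    values = np.asarray(values)
    # Every position starts as default_value...
    dense = np.full(shape, default_value, dtype=values.dtype)
    # ...and, optionally, repeated indices are rejected.
    if validate_indices:
        seen = set()
        for idx in indices:
            key = tuple(idx)
            if key in seen:
                raise ValueError("repeated index: %r" % (key,))
            seen.add(key)
    # Non-empty positions are overwritten with their values.
    for idx, val in zip(indices, values):
        dense[tuple(idx)] = val
    return dense

# The [3, 5] example from the docstring above:
indices = [[0, 1], [0, 3], [2, 0]]
values = ["a", "b", "c"]
dense = sparse_to_dense_sketch(indices, values, [3, 5], default_value="x")
```

The sketch makes the ordering point explicit: the default fill happens first, so only positions named in `indices` differ from `default_value`.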
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.split.md deleted file mode 100644 index f6cc936328..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.split.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.split(split_dim, num_split, value, name='split')` {#split} - -Splits a tensor into `num_split` tensors along one dimension. - -Splits `value` along dimension `split_dim` into `num_split` smaller tensors. -Requires that `num_split` evenly divide `value.shape[split_dim]`. - -For example: - -```python -# 'value' is a tensor with shape [5, 30] -# Split 'value' into 3 tensors along dimension 1 -split0, split1, split2 = tf.split(1, 3, value) -tf.shape(split0) ==> [5, 10] -``` - -##### Args: - - -* `split_dim`: A 0-D `int32` `Tensor`. The dimension along which to split. - Must be in the range `[0, rank(value))`. -* `num_split`: A Python integer. The number of ways to split. -* `value`: The `Tensor` to split. -* `name`: A name for the operation (optional). - -##### Returns: - - `num_split` `Tensor` objects resulting from splitting `value`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sqrt.md new file mode 100644 index 0000000000..250817f3bf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sqrt.md @@ -0,0 +1,16 @@ +### `tf.sqrt(x, name=None)` {#sqrt} + +Computes square root of x element-wise. + +I.e., \\(y = \sqrt{x} = x^{1/2}\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.square.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.square.md
deleted file mode 100644
index 649c763015..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.square.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.square(x, name=None)` {#square}
-
-Computes square of x element-wise.
-
-I.e., \\(y = x * x = x^2\\).
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.ExponentialMovingAverage.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.ExponentialMovingAverage.md
new file mode 100644
index 0000000000..ea0aa48161
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.ExponentialMovingAverage.md
@@ -0,0 +1,229 @@
+Maintains moving averages of variables by employing an exponential decay.
+
+When training a model, it is often beneficial to maintain moving averages of
+the trained parameters. Evaluations that use averaged parameters sometimes
+produce significantly better results than the final trained values.
+
+The `apply()` method adds shadow copies of trained variables and adds ops that
+maintain a moving average of the trained variables in their shadow copies.
+It is used when building the training model. The ops that maintain moving
+averages are typically run after each training step.
+The `average()` and `average_name()` methods give access to the shadow
+variables and their names. They are useful when building an evaluation
+model, or when restoring a model from a checkpoint file. They help use the
+moving averages in place of the last trained values for evaluations.
+ +The moving averages are computed using exponential decay. You specify the +decay value when creating the `ExponentialMovingAverage` object. The shadow +variables are initialized with the same initial values as the trained +variables. When you run the ops to maintain the moving averages, each +shadow variable is updated with the formula: + + `shadow_variable -= (1 - decay) * (shadow_variable - variable)` + +This is mathematically equivalent to the classic formula below, but the use +of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless +updates to the variables: + + `shadow_variable = decay * shadow_variable + (1 - decay) * variable` + +Reasonable values for `decay` are close to 1.0, typically in the +multiple-nines range: 0.999, 0.9999, etc. + +Example usage when creating a training model: + +```python +# Create variables. +var0 = tf.Variable(...) +var1 = tf.Variable(...) +# ... use the variables to build a training model... +... +# Create an op that applies the optimizer. This is what we usually +# would use as a training op. +opt_op = opt.minimize(my_loss, [var0, var1]) + +# Create an ExponentialMovingAverage object +ema = tf.train.ExponentialMovingAverage(decay=0.9999) + +# Create the shadow variables, and add ops to maintain moving averages +# of var0 and var1. +maintain_averages_op = ema.apply([var0, var1]) + +# Create an op that will update the moving averages after each training +# step. This is what we will use in place of the usual training op. +with tf.control_dependencies([opt_op]): + training_op = tf.group(maintain_averages_op) + +...train the model by running training_op... +``` + +There are two ways to use the moving averages for evaluations: + +* Build a model that uses the shadow variables instead of the variables. + For this, use the `average()` method which returns the shadow variable + for a given variable. +* Build a model normally but load the checkpoint files to evaluate by using + the shadow variable names. 
For this use the `average_name()` method. See
+  the [Saver class](../../api_docs/python/train.md#Saver) for more
+  information on restoring saved variables.
+
+Example of restoring the shadow variable values:
+
+```python
+# Create a Saver that loads variables from their saved shadow values.
+shadow_var0_name = ema.average_name(var0)
+shadow_var1_name = ema.average_name(var1)
+saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
+saver.restore(...checkpoint filename...)
+# var0 and var1 now hold the moving average values
+```
+
+- - -
+
+#### `tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage')` {#ExponentialMovingAverage.__init__}
+
+Creates a new ExponentialMovingAverage object.
+
+The `apply()` method has to be called to create shadow variables and add
+ops to maintain moving averages.
+
+The optional `num_updates` parameter allows one to tweak the decay rate
+dynamically. It is typical to pass the count of training steps, usually
+kept in a variable that is incremented at each step, in which case the
+decay rate is lower at the start of training. This makes moving averages
+move faster. If passed, the actual decay rate used is:
+
+  `min(decay, (1 + num_updates) / (10 + num_updates))`
+
+##### Args:
+
+
+* `decay`: Float. The decay to use.
+* `num_updates`: Optional count of number of updates applied to variables.
+* `name`: String. Optional prefix name to use for the name of ops added in
+    `apply()`.
+
+
+- - -
+
+#### `tf.train.ExponentialMovingAverage.apply(var_list=None)` {#ExponentialMovingAverage.apply}
+
+Maintains moving averages of variables.
+
+`var_list` must be a list of `Variable` or `Tensor` objects. This method
+creates shadow variables for all elements of `var_list`. Shadow variables
+for `Variable` objects are initialized to the variable's initial value.
+They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
+For `Tensor` objects, the shadow variables are initialized to 0.
+
+Shadow variables are created with `trainable=False` and added to the
+`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to
+`tf.all_variables()`.
+
+Returns an op that updates all shadow variables as described above.
+
+Note that `apply()` can be called multiple times with different lists of
+variables.
+
+##### Args:
+
+
+* `var_list`: A list of Variable or Tensor objects. The variables
+    and Tensors must be of types float32 or float64.
+
+##### Returns:
+
+  An Operation that updates the moving averages.
+
+##### Raises:
+
+
+* `TypeError`: If the arguments are not all float32 or float64.
+* `ValueError`: If the moving average of one of the variables is already
+    being computed.
+
+
+- - -
+
+#### `tf.train.ExponentialMovingAverage.average_name(var)` {#ExponentialMovingAverage.average_name}
+
+Returns the name of the `Variable` holding the average for `var`.
+
+The typical scenario for `ExponentialMovingAverage` is to compute moving
+averages of variables during training, and restore the variables from the
+computed moving averages during evaluations.
+
+To restore variables, you have to know the name of the shadow variables.
+That name and the original variable can then be passed to a `Saver()` object
+to restore the variable from the moving average value with:
+  `saver = tf.train.Saver({ema.average_name(var): var})`
+
+`average_name()` can be called whether or not `apply()` has been called.
+
+##### Args:
+
+
+* `var`: A `Variable` object.
+
+##### Returns:
+
+  A string: The name of the variable that will be used or was used
+  by the `ExponentialMovingAverage` class to hold the moving average of
+  `var`.
+
+
+- - -
+
+#### `tf.train.ExponentialMovingAverage.average(var)` {#ExponentialMovingAverage.average}
+
+Returns the `Variable` holding the average of `var`.
+
+##### Args:
+
+
+* `var`: A `Variable` object.
+
+##### Returns:
+
+  A `Variable` object or `None` if the moving average of `var`
+  is not maintained.
+
+
+- - -
+
+#### `tf.train.ExponentialMovingAverage.variables_to_restore(moving_avg_variables=None)` {#ExponentialMovingAverage.variables_to_restore}
+
+Returns a map of names to `Variables` to restore.
+
+If a variable has a moving average, use the moving average variable name as
+the restore name; otherwise, use the variable name.
+
+For example,
+
+```python
+  variables_to_restore = ema.variables_to_restore()
+  saver = tf.train.Saver(variables_to_restore)
+```
+
+Below is an example of such a mapping:
+
+```
+  conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma,
+  conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params,
+  global_step: global_step
+```
+
+##### Args:
+
+
+* `moving_avg_variables`: a list of variables that require the use of the
+    moving average variable name for restoring. If None, it will default to
+    variables.moving_average_variables() + variables.trainable_variables()
+
+##### Returns:
+
+  A map from restore_names to variables. The restore_name can be the
+  moving_average version of the variable name if it exists, or the original
+  variable name.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.FtrlOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.FtrlOptimizer.md
new file mode 100644
index 0000000000..4fe719ee6b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.FtrlOptimizer.md
@@ -0,0 +1,32 @@
+Optimizer that implements the FTRL algorithm.
+
+See this [paper](
+https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
+
+- - -
+
+#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
+
+Construct a new FTRL optimizer.
+
+##### Args:
+
+
+* `learning_rate`: A float value or a constant float `Tensor`.
+* `learning_rate_power`: A float value, must be less than or equal to zero.
+* `initial_accumulator_value`: The starting value for accumulators.
+    Only positive values are allowed.
+* `l1_regularization_strength`: A float value, must be greater than or
+    equal to zero.
+* `l2_regularization_strength`: A float value, must be greater than or
+    equal to zero.
+* `use_locking`: If `True` use locks for update operations.
+* `name`: Optional name prefix for the operations created when applying
+    gradients. Defaults to "Ftrl".
+
+##### Raises:
+
+
+* `ValueError`: If one of the arguments is invalid.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Optimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Optimizer.md
new file mode 100644
index 0000000000..d5d8bb13dd
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Optimizer.md
@@ -0,0 +1,255 @@
+Base class for optimizers.
+
+This class defines the API to add Ops to train a model. You never use this
+class directly, but instead instantiate one of its subclasses such as
+`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
+
+### Usage
+
+```python
+# Create an optimizer with the desired parameters.
+opt = GradientDescentOptimizer(learning_rate=0.1)
+# Add Ops to the graph to minimize a cost by updating a list of variables.
+# "cost" is a Tensor, and the list of variables contains tf.Variable
+# objects.
+opt_op = opt.minimize(cost, var_list=<list of variables>)
+```
+
+In the training program you will just have to run the returned Op.
+
+```python
+# Execute opt_op to do one step of training:
+opt_op.run()
+```
+
+### Processing gradients before applying them.
+
+Calling `minimize()` takes care of both computing the gradients and
+applying them to the variables.
If you want to process the gradients
+before applying them you can instead use the optimizer in three steps:
+
+1. Compute the gradients with `compute_gradients()`.
+2. Process the gradients as you wish.
+3. Apply the processed gradients with `apply_gradients()`.
+
+Example:
+
+```python
+# Create an optimizer.
+opt = GradientDescentOptimizer(learning_rate=0.1)
+
+# Compute the gradients for a list of variables.
+grads_and_vars = opt.compute_gradients(loss, <list of variables>)
+
+# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
+# need to the 'gradient' part, for example cap them, etc.
+capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
+
+# Ask the optimizer to apply the capped gradients.
+opt.apply_gradients(capped_grads_and_vars)
+```
+
+- - -
+
+#### `tf.train.Optimizer.__init__(use_locking, name)` {#Optimizer.__init__}
+
+Create a new Optimizer.
+
+This must be called by the constructors of subclasses.
+
+##### Args:
+
+
+* `use_locking`: Bool. If True, use locks to prevent concurrent updates
+    to variables.
+* `name`: A non-empty string. The name to use for accumulators created
+    for the optimizer.
+
+##### Raises:
+
+
+* `ValueError`: If name is malformed.
+
+
+
+- - -
+
+#### `tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#Optimizer.minimize}
+
+Add operations to minimize `loss` by updating `var_list`.
+
+This method simply combines calls to `compute_gradients()` and
+`apply_gradients()`. If you want to process the gradients before applying
+them, call `compute_gradients()` and `apply_gradients()` explicitly instead
+of using this function.
+
+##### Args:
+
+
+* `loss`: A `Tensor` containing the value to minimize.
+* `global_step`: Optional `Variable` to increment by one after the
+    variables have been updated.
+* `var_list`: Optional list of `Variable` objects to update to minimize
+    `loss`.
Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* `gate_gradients`: How to gate the computation of gradients. Can be
+    `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* `aggregation_method`: Specifies the method used to combine gradient terms.
+    Valid values are defined in the class `AggregationMethod`.
+* `colocate_gradients_with_ops`: If True, try colocating gradients with
+    the corresponding op.
+* `name`: Optional name for the returned operation.
+* `grad_loss`: Optional. A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+  An Operation that updates the variables in `var_list`. If `global_step`
+  was not `None`, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* `ValueError`: If some of the variables are not `Variable` objects.
+
+
+- - -
+
+#### `tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#Optimizer.compute_gradients}
+
+Compute gradients of `loss` for the variables in `var_list`.
+
+This is the first part of `minimize()`. It returns a list
+of (gradient, variable) pairs where "gradient" is the gradient
+for "variable". Note that "gradient" can be a `Tensor`, an
+`IndexedSlices`, or `None` if there is no gradient for the
+given variable.
+
+##### Args:
+
+
+* `loss`: A Tensor containing the value to minimize.
+* `var_list`: Optional list of tf.Variable to update to minimize
+    `loss`. Defaults to the list of variables collected in the graph
+    under the key `GraphKeys.TRAINABLE_VARIABLES`.
+* `gate_gradients`: How to gate the computation of gradients. Can be
+    `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
+* `aggregation_method`: Specifies the method used to combine gradient terms.
+    Valid values are defined in the class `AggregationMethod`.
+* `colocate_gradients_with_ops`: If True, try colocating gradients with
+    the corresponding op.
+* `grad_loss`: Optional.
A `Tensor` holding the gradient computed for `loss`.
+
+##### Returns:
+
+  A list of (gradient, variable) pairs.
+
+##### Raises:
+
+
+* `TypeError`: If `var_list` contains anything other than `Variable` objects.
+* `ValueError`: If some arguments are invalid.
+
+
+- - -
+
+#### `tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#Optimizer.apply_gradients}
+
+Apply gradients to variables.
+
+This is the second part of `minimize()`. It returns an `Operation` that
+applies gradients.
+
+##### Args:
+
+
+* `grads_and_vars`: List of (gradient, variable) pairs as returned by
+    `compute_gradients()`.
+* `global_step`: Optional `Variable` to increment by one after the
+    variables have been updated.
+* `name`: Optional name for the returned operation. Defaults to the
+    name passed to the `Optimizer` constructor.
+
+##### Returns:
+
+  An `Operation` that applies the specified gradients. If `global_step`
+  was not None, that operation also increments `global_step`.
+
+##### Raises:
+
+
+* `TypeError`: If `grads_and_vars` is malformed.
+* `ValueError`: If none of the variables have gradients.
+
+
+
+### Gating Gradients
+
+Both `minimize()` and `compute_gradients()` accept a `gate_gradients` argument
+that controls the degree of parallelism during the application of the
+gradients.
+
+The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
+
+`GATE_NONE`: Compute and apply gradients in parallel. This provides
+the maximum parallelism in execution, at the cost of some non-reproducibility
+in the results. For example, the two gradients of `matmul` depend on the input
+values: With `GATE_NONE` one of the gradients could be applied to one of the
+inputs _before_ the other gradient is computed, resulting in non-reproducible
+results.
+
+`GATE_OP`: For each Op, make sure all gradients are computed before
+they are used. This prevents race conditions for Ops that generate gradients
+for multiple inputs where the gradients depend on the inputs.
+ +`GATE_GRAPH`: Make sure all gradients for all variables are computed +before any one of them is used. This provides the least parallelism but can +be useful if you want to process all gradients before applying any of them. + +### Slots + +Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`, +allocate and manage additional variables associated with the variables to +train. These are called Slots. Slots have names and you can ask the +optimizer for the names of the slots that it uses. Once you have a slot name +you can ask the optimizer for the variable it created to hold the slot value. + +This can be useful if you want to log or debug a training algorithm, report stats +about the slots, etc. + +- - - + +#### `tf.train.Optimizer.get_slot_names()` {#Optimizer.get_slot_names} + +Return a list of the names of slots created by the `Optimizer`. + +See `get_slot()`. + +##### Returns: + + A list of strings. + + +- - - + +#### `tf.train.Optimizer.get_slot(var, name)` {#Optimizer.get_slot} + +Return a slot named `name` created for `var` by the Optimizer. + +Some `Optimizer` subclasses use additional variables. For example +`Momentum` and `Adagrad` use variables to accumulate updates. This method +gives access to these `Variable` objects if for some reason you need them. + +Use `get_slot_names()` to get the list of slot names created by the +`Optimizer`. + +##### Args: + + +* `var`: A variable passed to `minimize()` or `apply_gradients()`. +* `name`: A string. + +##### Returns: + + The `Variable` for the slot if it was created, `None` otherwise.
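The two-step `compute_gradients()`/`apply_gradients()` flow and the slot mechanism can be sketched in plain Python. This is an illustrative toy (no TensorFlow; `ToyMomentumOptimizer`, its fields, and the numbers are assumptions for the sketch, not the real implementation):

```python
class ToyMomentumOptimizer:
    """Pure-Python sketch of the Optimizer pattern described above:
    minimize() is compute_gradients() followed by apply_gradients(),
    and a per-variable "momentum" slot accumulates updates."""

    def __init__(self, learning_rate=0.1, momentum=0.9):
        self.lr = learning_rate
        self.momentum = momentum
        self._slots = {}  # {(var_name, slot_name): value}

    def compute_gradients(self, grad_fn, var_list):
        # First half of minimize(): a list of (gradient, variable) pairs.
        return [(grad_fn(v), v) for v in var_list]

    def apply_gradients(self, grads_and_vars, var_store):
        # Second half: update each variable, maintaining its slot.
        for grad, name in grads_and_vars:
            m = self._slots.get((name, "momentum"), 0.0)
            m = self.momentum * m + grad
            self._slots[(name, "momentum")] = m
            var_store[name] -= self.lr * m

    def get_slot_names(self):
        return sorted({slot for (_, slot) in self._slots})

    def get_slot(self, var_name, name):
        # The slot value if it was created, None otherwise.
        return self._slots.get((var_name, name))


variables = {"w": 1.0}
opt = ToyMomentumOptimizer()
# Gradient of loss = w**2 is 2*w, evaluated at the current value of w.
grads = opt.compute_gradients(lambda name: 2.0 * variables[name], ["w"])
opt.apply_gradients(grads, variables)
print(variables["w"])                  # 0.8
print(opt.get_slot_names())            # ['momentum']
print(opt.get_slot("w", "momentum"))   # 2.0
```

The split mirrors the docs above: callers who want to inspect or transform gradients work with the pairs from `compute_gradients()` before handing them to `apply_gradients()`.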
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md new file mode 100644 index 0000000000..2caa8f769a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md @@ -0,0 +1,4 @@ +#### `tf.train.QueueRunner.from_proto(queue_runner_def)` {#QueueRunner.from_proto} + +Returns a `QueueRunner` object created from `queue_runner_def`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Server.md deleted file mode 100644 index 3f87ed3bf0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Server.md +++ /dev/null @@ -1,113 +0,0 @@ -An in-process TensorFlow server, for use in distributed training. - -A `tf.train.Server` instance encapsulates a set of devices and a -[`tf.Session`](../../api_docs/python/client.md#Session) target that -can participate in distributed training. A server belongs to a -cluster (specified by a [`tf.train.ClusterSpec`](#ClusterSpec)), and -corresponds to a particular task in a named job. The server can -communicate with any other server in the same cluster. - -- - - - -#### `tf.train.Server.__init__(server_or_cluster_def, job_name=None, task_index=None, protocol=None, start=True)` {#Server.__init__} - -Creates a new server with the given definition. - -The `job_name`, `task_index`, and `protocol` arguments are optional, and -override any information provided in `server_or_cluster_def`. - -##### Args: - - -* `server_or_cluster_def`: A `tf.train.ServerDef` or - `tf.train.ClusterDef` protocol buffer, or a - `tf.train.ClusterSpec` object, describing the server to be - created and/or the cluster of which it is a member. -* `job_name`: (Optional.) 
Specifies the name of the job of which the server - is a member. Defaults to the value in `server_or_cluster_def`, if - specified. -* `task_index`: (Optional.) Specifies the task index of the server in its - job. Defaults to the value in `server_or_cluster_def`, if specified. - Otherwise defaults to 0 if the server's job has only one task. -* `protocol`: (Optional.) Specifies the protocol to be used by the server. - Acceptable values include `"grpc"`. Defaults to the value in - `server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`. -* `start`: (Optional.) Boolean, indicating whether to start the server - after creating it. Defaults to `True`. - -##### Raises: - - tf.errors.OpError: Or one of its subclasses if an error occurs while - creating the TensorFlow server. - - -- - - - -#### `tf.train.Server.create_local_server(start=True)` {#Server.create_local_server} - -Creates a new single-process cluster running on the local host. - -This method is a convenience wrapper for creating a -`tf.train.Server` with a `tf.train.ServerDef` that specifies a -single-process cluster containing a single task in a job called -`"local"`. - -##### Args: - - -* `start`: (Optional.) Boolean, indicating whether to start the server after - creating it. Defaults to `True`. - -##### Returns: - - A local `tf.train.Server`. - - -- - - - -#### `tf.train.Server.target` {#Server.target} - -Returns the target for a `tf.Session` to connect to this server. - -To create a -[`tf.Session`](../../api_docs/python/client.md#Session) that -connects to this server, use the following snippet: - -```python -server = tf.train.Server(...) -with tf.Session(server.target): - # ... -``` - -##### Returns: - - A string containing a session target for this server. - - - -- - - - -#### `tf.train.Server.start()` {#Server.start} - -Starts this server. - -##### Raises: - - tf.errors.OpError: Or one of its subclasses if an error occurs while - starting the TensorFlow server. 
- - -- - - - -#### `tf.train.Server.join()` {#Server.join} - -Blocks until the server has shut down. - -This method currently blocks forever. - -##### Raises: - - tf.errors.OpError: Or one of its subclasses if an error occurs while - joining the TensorFlow server. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.batch.md deleted file mode 100644 index 96142e0719..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.batch.md +++ /dev/null @@ -1,68 +0,0 @@ -### `tf.train.batch(tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch} - -Creates batches of tensors in `tensors`. - -The argument `tensors` can be a list or a dictionary of tensors. -The value returned by the function will be of the same type -as `tensors`. - -This function is implemented using a queue. A `QueueRunner` for the -queue is added to the current `Graph`'s `QUEUE_RUNNER` collection. - -If `enqueue_many` is `False`, `tensors` is assumed to represent a single -example. An input tensor with shape `[x, y, z]` will be output as a tensor -with shape `[batch_size, x, y, z]`. - -If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of -examples, where the first dimension is indexed by example, and all members of -`tensors` should have the same size in the first dimension. If an input -tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, -y, z]`. The `capacity` argument controls how long the prefetching is -allowed to grow the queues. - -The returned operation is a dequeue operation and will throw -`tf.errors.OutOfRangeError` if the input queue is exhausted.
If this -operation is feeding another input queue, its queue runner will catch -this exception; however, if this operation is used in your main thread, -you are responsible for catching this yourself. - -*N.B.:* If `dynamic_pad` is `False`, you must ensure that either -(i) the `shapes` argument is passed, or (ii) all of the tensors in -`tensors` have fully-defined shapes. `ValueError` will be -raised if neither of these conditions holds. - -If `dynamic_pad` is `True`, it is sufficient that the *rank* of the -tensors is known, but individual dimensions may have shape `None`. -In this case, for each enqueue the dimensions with value `None` -may have a variable length; upon dequeue, the output tensors will be padded -on the right to the maximum shape of the tensors in the current minibatch. -For numbers, this padding takes value 0. For strings, this padding is -the empty string. See `PaddingFIFOQueue` for more info. - -##### Args: - - -* `tensors`: The list or dictionary of tensors to enqueue. -* `batch_size`: The new batch size pulled from the queue. -* `num_threads`: The number of threads enqueuing `tensors`. -* `capacity`: An integer. The maximum number of elements in the queue. -* `enqueue_many`: Whether each tensor in `tensors` is a single example. -* `shapes`: (Optional) The shapes for each example. Defaults to the - inferred shapes for `tensors`. -* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. - The given dimensions are padded upon dequeue so that tensors within a - batch have the same shapes. -* `shared_name`: (optional). If set, this queue will be shared under the given - name across multiple sessions. -* `name`: (Optional) A name for the operations. - -##### Returns: - - A list or dictionary of tensors with the same types as `tensors`. - -##### Raises: - - -* `ValueError`: If the `shapes` are not specified, and cannot be - inferred from the elements of `tensors`.
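The shape semantics described above can be sketched in plain Python, without queues or threads (`toy_batch` is a made-up helper for illustration, not part of TensorFlow):

```python
def toy_batch(examples, batch_size, dynamic_pad=False):
    """Collect batch_size examples into one batch, stacking along a new
    leading dimension: an [x, y] example becomes part of a
    [batch_size, x, y] batch.  With dynamic_pad=True, 1-D examples of
    different lengths are right-padded with 0 to the longest example in
    the minibatch, mirroring the PaddingFIFOQueue behaviour."""
    batch = examples[:batch_size]
    if dynamic_pad:
        longest = max(len(e) for e in batch)
        batch = [e + [0] * (longest - len(e)) for e in batch]
    return batch

# Fixed-shape examples: every example must have the same shape
# (cf. the `shapes` argument).
print(toy_batch([[1, 2], [3, 4], [5, 6]], batch_size=2))
# [[1, 2], [3, 4]]

# Variable-length examples, as with dynamic_pad=True.
print(toy_batch([[1], [2, 3, 4], [5, 6]], batch_size=3, dynamic_pad=True))
# [[1, 0, 0], [2, 3, 4], [5, 6, 0]]
```

The real op additionally prefetches with background threads and a bounded queue; this sketch only shows how shapes and padding combine.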
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.replica_device_setter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.replica_device_setter.md new file mode 100644 index 0000000000..a5ea200562 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.replica_device_setter.md @@ -0,0 +1,50 @@ +### `tf.train.replica_device_setter(ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None)` {#replica_device_setter} + +Return a `device function` to use when building a Graph for replicas. + +Device functions are used in `with tf.device(device_function):` statements to +automatically assign devices to `Operation` objects as they are constructed. +Device constraints are added from the inner-most context first, working +outwards. The merging behavior adds constraints only to fields that are not +yet set by an inner context. Currently the fields are (job, task, cpu/gpu). + +If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op. + +For example, + +```python +# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker +# jobs on hosts worker0, worker1 and worker2. +cluster_spec = { + "ps": ["ps0:2222", "ps1:2222"], + "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]} +with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)): + # Build your graph + v1 = tf.Variable(...) # assigned to /job:ps/task:0 + v2 = tf.Variable(...) # assigned to /job:ps/task:1 + v3 = tf.Variable(...) # assigned to /job:ps/task:0 +# Run compute +``` + +##### Args: + + +* `ps_tasks`: Number of tasks in the `ps` job. +* `ps_device`: String. Device of the `ps` job. If empty, no `ps` job is used. + Defaults to `/job:ps`. +* `worker_device`: String. Device of the `worker` job. If empty, no `worker` + job is used. +* `merge_devices`: `Boolean`.
If `True`, merges device specifications rather than + overriding them: a device field is set only if it is still completely + unset by an inner context. +* `cluster`: `ClusterDef` proto or `ClusterSpec`. +* `ps_ops`: List of `Operation` objects that need to be placed on `ps` devices. + +##### Returns: + + A function to pass to `tf.device()`. + +##### Raises: + + TypeError if `cluster` is not a dictionary or `ClusterDef` protocol buffer. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.tuple.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.tuple.md deleted file mode 100644 index 503a98d625..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.tuple.md +++ /dev/null @@ -1,36 +0,0 @@ -### `tf.tuple(tensors, name=None, control_inputs=None)` {#tuple} - -Group tensors together. - -This creates a tuple of tensors with the same values as the `tensors` -argument, except that the value of each tensor is only returned after the -values of all tensors have been computed. - -`control_inputs` contains additional ops that have to finish before this op -finishes, but whose outputs are not returned. - -This can be used as a "join" mechanism for parallel computations: all the -argument tensors can be computed in parallel, but the values of any tensor -returned by `tuple` are only available after all the parallel computations -are done. - -See also `group` and `with_dependencies`. - -##### Args: - - -* `tensors`: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`. -* `name`: (optional) A name to use as a `name_scope` for the operation. -* `control_inputs`: List of additional ops to finish before returning. - -##### Returns: - - Same as `tensors`. - -##### Raises: - - -* `ValueError`: If `tensors` does not contain any `Tensor` or `IndexedSlices`. -* `TypeError`: If `control_inputs` is not a list of `Operation` or `Tensor` - objects.
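The "join" behaviour of `tf.tuple` can be sketched in plain Python with threads (a toy stand-in for illustration, not how TensorFlow implements it):

```python
import threading

def toy_tuple(fns):
    """Run each fn in parallel, but only hand back any result after
    *all* of them have finished -- the join behaviour of tf.tuple."""
    results = [None] * len(fns)

    def run(i):
        results[i] = fns[i]()

    threads = [threading.Thread(target=run, args=(i,)) for i in range(len(fns))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # nothing is returned until every computation is done
    return results

print(toy_tuple([lambda: 1 + 1, lambda: 2 * 3]))  # [2, 6]
```

As in the docs above, the individual computations may overlap freely; only the availability of the returned values is gated on all of them completing.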
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.unsorted_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.unsorted_segment_sum.md new file mode 100644 index 0000000000..63255ce815 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.unsorted_segment_sum.md @@ -0,0 +1,38 @@ +### `tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)` {#unsorted_segment_sum} + +Computes the sum along segments of a tensor. + +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +Computes a tensor such that +\\(output_i = \sum_j data_j\\) where sum is over `j` such +that `segment_ids[j] == i`. Unlike `SegmentSum`, `segment_ids` +need not be sorted and need not cover all values in the full + range of valid values. + +If the sum is empty for a given segment ID `i`, `output[i] = 0`. + +`num_segments` should equal the number of distinct segment IDs. + +
+ +
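For example, the semantics can be sketched in plain Python (an illustrative sketch of the formula above, not the kernel implementation):

```python
def toy_unsorted_segment_sum(data, segment_ids, num_segments):
    """output[i] = sum of data[j] over all j with segment_ids[j] == i.
    segment_ids need not be sorted, and empty segments sum to 0."""
    output = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        output[seg] += value
    return output

# Unsorted ids, and segment 2 is empty:
print(toy_unsorted_segment_sum([5, 1, 7, 2, 3], [0, 3, 0, 1, 3], 4))
# [12, 2, 0, 4]
```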
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. +* `num_segments`: A `Tensor` of type `int32`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `num_segments`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.where.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.where.md new file mode 100644 index 0000000000..eae2259721 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.where.md @@ -0,0 +1,46 @@ +### `tf.where(input, name=None)` {#where} + +Returns locations of true values in a boolean tensor. + +This operation returns the coordinates of true elements in `input`. The +coordinates are returned in a 2-D tensor where the first dimension (rows) +represents the number of true elements, and the second dimension (columns) +represents the coordinates of the true elements. Keep in mind, the shape of +the output tensor can vary depending on how many true values there are in +`input`. Indices are output in row-major order. + +For example: + +```prettyprint +# 'input' tensor is [[True, False] +# [True, False]] +# 'input' has two true values, so output has two coordinates. +# 'input' has rank of 2, so coordinates have two indices. +where(input) ==> [[0, 0], + [1, 0]] + +# `input` tensor is [[[True, False] +# [True, False]] +# [[False, True] +# [False, True]] +# [[False, False] +# [False, True]]] +# 'input' has 5 true values, so output has 5 coordinates. +# 'input' has rank of 3, so coordinates have three indices. 
+where(input) ==> [[0, 0, 0], + [0, 1, 0], + [1, 0, 1], + [1, 1, 1], + [2, 1, 1]] +``` + +##### Args: + + +* `input`: A `Tensor` of type `bool`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int64`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.while_loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.while_loop.md new file mode 100644 index 0000000000..4baea56c63 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.while_loop.md @@ -0,0 +1,60 @@ +### `tf.while_loop(cond, body, loop_vars, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#while_loop} + +Repeat `body` while the condition `cond` is true. + +`cond` is a callable returning a boolean scalar tensor. `body` is a callable +returning a list of tensors of the same length and with the same types as +`loop_vars`. `loop_vars` is a list of tensors that is passed to both `cond` +and `body`. `cond` and `body` both take as many arguments as there are +`loop_vars`. + +In addition to regular Tensors or IndexedSlices, the body may accept and +return TensorArray objects. The flows of the TensorArray objects will +be appropriately forwarded between loops and during gradient calculations. + +While `cond` evaluates to true, `body` is executed. + +`while_loop` implements non-strict semantics, enabling multiple iterations +to run in parallel. The maximum number of parallel iterations can be +controlled by `parallel_iterations`, which gives users some control over +memory consumption and execution order. For correct programs, `while_loop` +should return the same result for any parallel_iterations > 0. + +For training, TensorFlow remembers the tensors that are produced in the +forward inference but needed in back propagation. These tensors can be a +main source of memory consumption and often cause OOM problems when training +on GPUs. 
When the flag swap_memory is true, we swap out these tensors from +GPU to CPU. This for example allows us to train RNN models with very long +sequences and large batches. + +##### Args: + + +* `cond`: A callable that represents the termination condition of the loop. +* `body`: A callable that represents the loop body. +* `loop_vars`: The list of variable input tensors. +* `parallel_iterations`: The number of iterations allowed to run in parallel. +* `back_prop`: Whether backprop is enabled for this while loop. +* `swap_memory`: Whether GPU-CPU memory swap is enabled for this loop. +* `name`: Optional name prefix for the returned tensors. + +##### Returns: + + The output tensors for the loop variables after the loop. + +##### Raises: + + +* `TypeError`: if `cond` or `body` is not callable. +* `ValueError`: if `loop_var` is empty. + + +* `Example`: + + ```python + i = tf.constant(0) + c = lambda i: tf.less(i, 10) + b = lambda i: tf.add(i, 1) + r = tf.while_loop(c, b, [i]) + ``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros.md new file mode 100644 index 0000000000..57598a372d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros.md @@ -0,0 +1,24 @@ +### `tf.zeros(shape, dtype=tf.float32, name=None)` {#zeros} + +Creates a tensor with all elements set to zero. + +This operation returns a tensor of type `dtype` with shape `shape` and +all elements set to zero. + +For example: + +```python +tf.zeros([3, 4], int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]] +``` + +##### Args: + + +* `shape`: Either a list of integers, or a 1-D `Tensor` of type `int32`. +* `dtype`: The type of an element in the resulting `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with all elements set to zero. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros_initializer.md deleted file mode 100644 index 707393f8be..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.zeros_initializer.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.zeros_initializer(shape, dtype=tf.float32)` {#zeros_initializer} - -An adaptor for zeros() to match the Initializer spec. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.AggregationMethod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.AggregationMethod.md deleted file mode 100644 index ee655fbd25..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.AggregationMethod.md +++ /dev/null @@ -1,10 +0,0 @@ -A class listing aggregation methods used to combine gradients. - -Computing partial derivatives can require aggregating gradient -contributions. This class lists the various methods that can -be used to combine gradients in the graph: - -* `ADD_N`: All of the gradient terms are summed as part of one - operation using the "AddN" op. It has the property that all - gradients must be ready before any aggregation is performed. -* `DEFAULT`: The system-chosen default aggregation method. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md new file mode 100644 index 0000000000..5cbba0ada6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md @@ -0,0 +1,18 @@ +#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string} + +Construct a `DeviceSpec` from a string. + +##### Args: + + +* `spec`: a string of the form + /job:&lt;name&gt;/replica:&lt;id&gt;/task:&lt;id&gt;/device:CPU:&lt;id&gt; + or + /job:&lt;name&gt;/replica:&lt;id&gt;/task:&lt;id&gt;/device:GPU:&lt;id&gt; + as cpu and gpu are mutually exclusive.
All entries are optional. + +##### Returns: + + A DeviceSpec. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FIFOQueue.md deleted file mode 100644 index 129107384f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FIFOQueue.md +++ /dev/null @@ -1,41 +0,0 @@ -A queue implementation that dequeues elements in first-in, first-out order. - -See [`tf.QueueBase`](#QueueBase) for a description of the methods on -this class. - -- - - - -#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__} - -Creates a queue that dequeues elements in a first-in first-out order. - -A `FIFOQueue` has bounded capacity; supports multiple concurrent -producers and consumers; and provides exactly-once delivery. - -A `FIFOQueue` holds a list of up to `capacity` elements. Each -element is a fixed-length tuple of tensors whose dtypes are -described by `dtypes`, and whose shapes are optionally described -by the `shapes` argument. - -If the `shapes` argument is specified, each component of a queue -element must have the respective fixed shape. If it is -unspecified, different queue elements may have different shapes, -but the use of `dequeue_many` is disallowed. - -##### Args: - - -* `capacity`: An integer. The upper bound on the number of elements - that may be stored in this queue. -* `dtypes`: A list of `DType` objects. The length of `dtypes` must equal - the number of tensors in each queue element. -* `shapes`: (Optional.) A list of fully-defined `TensorShape` objects - with the same length as `dtypes`, or `None`. -* `names`: (Optional.) A list of strings naming the components in the queue - with the same length as `dtypes`, or `None`. If specified, the dequeue - methods return a dictionary with the names as keys. -* `shared_name`: (Optional.)
If non-empty, this queue will be shared under - the given name across multiple sessions. -* `name`: Optional name for the queue operation. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.IdentityReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.IdentityReader.md new file mode 100644 index 0000000000..46ba1e9d17 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.IdentityReader.md @@ -0,0 +1,148 @@ +A Reader that outputs the queued work as both the key and value. + +To use, enqueue strings in a Queue. Read will take the front +work string and output (work, work). + +See ReaderBase for supported methods. +- - - + +#### `tf.IdentityReader.__init__(name=None)` {#IdentityReader.__init__} + +Create an IdentityReader. + +##### Args: + + +* `name`: A name for the operation (optional). + + +- - - + +#### `tf.IdentityReader.num_records_produced(name=None)` {#IdentityReader.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.IdentityReader.num_work_units_completed(name=None)` {#IdentityReader.num_work_units_completed} + +Returns the number of work units this reader has finished processing. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.IdentityReader.read(queue, name=None)` {#IdentityReader.read} + +Returns the next record (key, value pair) produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items.
+* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.IdentityReader.reader_ref` {#IdentityReader.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.IdentityReader.reset(name=None)` {#IdentityReader.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.IdentityReader.restore_state(state, name=None)` {#IdentityReader.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.IdentityReader.serialize_state(name=None)` {#IdentityReader.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. + + +- - - + +#### `tf.IdentityReader.supports_serialize` {#IdentityReader.supports_serialize} + +Whether the Reader implementation can serialize its state. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.NoGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.NoGradient.md new file mode 100644 index 0000000000..15c40e6828 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.NoGradient.md @@ -0,0 +1,23 @@ +### `tf.NoGradient(op_type)` {#NoGradient} + +Specifies that ops of type `op_type` do not have a defined gradient. 
+ +This function is only used when defining a new op type. It may be +used for ops such as `tf.size()` that are not differentiable. For +example: + +```python +tf.NoGradient("Size") +``` + +##### Args: + + +* `op_type`: The string type of an operation. This corresponds to the + `OpDef.name` field for the proto that defines the operation. + +##### Raises: + + +* `TypeError`: If `op_type` is not a string. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.RegisterGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.RegisterGradient.md new file mode 100644 index 0000000000..736bd5b4af --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.RegisterGradient.md @@ -0,0 +1,36 @@ +A decorator for registering the gradient function for an op type. + +This decorator is only used when defining a new op type. For an op +with `m` inputs and `n` outputs, the gradient function is a function +that takes the original `Operation` and `n` `Tensor` objects +(representing the gradients with respect to each output of the op), +and returns `m` `Tensor` objects (representing the partial gradients +with respect to each input of the op). + +For example, assuming that operations of type `"Sub"` take two +inputs `x` and `y`, and return a single output `x - y`, the +following gradient function would be registered: + +```python +@tf.RegisterGradient("Sub") +def _sub_grad(unused_op, grad): + return grad, tf.neg(grad) +``` + +The decorator argument `op_type` is the string type of an +operation. This corresponds to the `OpDef.name` field for the proto +that defines the operation. + +- - - + +#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__} + +Creates a new decorator with `op_type` as the Operation type. + +##### Args: + + +* `op_type`: The string type of an operation. This corresponds to the + `OpDef.name` field for the proto that defines the operation. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Tensor.md new file mode 100644 index 0000000000..73af134a7a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Tensor.md @@ -0,0 +1,228 @@ +Represents a value produced by an `Operation`. + +A `Tensor` is a symbolic handle to one of the outputs of an +`Operation`. It does not hold the values of that operation's output, +but instead provides a means of computing those values in a +TensorFlow [`Session`](../../api_docs/python/client.md#Session). + +This class has two primary purposes: + +1. A `Tensor` can be passed as an input to another `Operation`. + This builds a dataflow connection between operations, which + enables TensorFlow to execute an entire `Graph` that represents a + large, multi-step computation. + +2. After the graph has been launched in a session, the value of the + `Tensor` can be computed by passing it to + [`Session.run()`](../../api_docs/python/client.md#Session.run). + `t.eval()` is a shortcut for calling + `tf.get_default_session().run(t)`. + +In the following example, `c`, `d`, and `e` are symbolic `Tensor` +objects, whereas `result` is a numpy array that stores a concrete +value: + +```python +# Build a dataflow graph. +c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) +d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) +e = tf.matmul(c, d) + +# Construct a `Session` to execute the graph. +sess = tf.Session() + +# Execute the graph and store the value that `e` represents in `result`. +result = sess.run(e) +``` + +- - - + +#### `tf.Tensor.dtype` {#Tensor.dtype} + +The `DType` of elements in this tensor. + + +- - - + +#### `tf.Tensor.name` {#Tensor.name} + +The string name of this tensor. + + +- - - + +#### `tf.Tensor.value_index` {#Tensor.value_index} + +The index of this tensor in the outputs of its `Operation`. 
+ + +- - - + +#### `tf.Tensor.graph` {#Tensor.graph} + +The `Graph` that contains this tensor. + + +- - - + +#### `tf.Tensor.op` {#Tensor.op} + +The `Operation` that produces this tensor as an output. + + +- - - + +#### `tf.Tensor.consumers()` {#Tensor.consumers} + +Returns a list of `Operation`s that consume this tensor. + +##### Returns: + + A list of `Operation`s. + + + +- - - + +#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval} + +Evaluates this tensor in a `Session`. + +Calling this method will execute all preceding operations that +produce the inputs needed for the operation that produces this +tensor. + +*N.B.* Before invoking `Tensor.eval()`, its graph must have been +launched in a session, and either a default session must be +available, or `session` must be specified explicitly. + +##### Args: + + +* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. + See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a + description of the valid feed values. +* `session`: (Optional.) The `Session` to be used to evaluate this tensor. If + none, the default session will be used. + +##### Returns: + + A numpy array corresponding to the value of this tensor. + + + +- - - + +#### `tf.Tensor.get_shape()` {#Tensor.get_shape} + +Returns the `TensorShape` that represents the shape of this tensor. + +The shape is computed using shape inference functions that are +registered for each `Operation` type using `tf.RegisterShape`. +See [`TensorShape`](../../api_docs/python/framework.md#TensorShape) for more +details of what a shape represents. + +The inferred shape of a tensor is used to provide shape +information without having to launch the graph in a session. This +can be used for debugging, and providing early error messages. 
For +example: + +```python +c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) + +print(c.get_shape()) +==> TensorShape([Dimension(2), Dimension(3)]) + +d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]) + +print(d.get_shape()) +==> TensorShape([Dimension(4), Dimension(2)]) + +# Raises a ValueError, because `c` and `d` do not have compatible +# inner dimensions. +e = tf.matmul(c, d) + +f = tf.matmul(c, d, transpose_a=True, transpose_b=True) + +print(f.get_shape()) +==> TensorShape([Dimension(3), Dimension(4)]) +``` + +In some cases, the inferred shape may have unknown dimensions. If +the caller has additional information about the values of these +dimensions, `Tensor.set_shape()` can be used to augment the +inferred shape. + +##### Returns: + + A `TensorShape` representing the shape of this tensor. + + +- - - + +#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape} + +Updates the shape of this tensor. + +This method can be called multiple times, and will merge the given +`shape` with the current shape of this tensor. It can be used to +provide additional information about the shape of this tensor that +cannot be inferred from the graph alone. For example, this can be used +to provide additional information about the shapes of images: + +```python +_, image_data = tf.TFRecordReader(...).read(...) +image = tf.image.decode_png(image_data, channels=3) + +# The height and width dimensions of `image` are data dependent, and +# cannot be computed without executing the op. +print(image.get_shape()) +==> TensorShape([Dimension(None), Dimension(None), Dimension(3)]) + +# We know that each image in this dataset is 28 x 28 pixels. +image.set_shape([28, 28, 3]) +print(image.get_shape()) +==> TensorShape([Dimension(28), Dimension(28), Dimension(3)]) +``` + +##### Args: + + +* `shape`: A `TensorShape` representing the shape of this tensor. + +##### Raises: + + +* `ValueError`: If `shape` is not compatible with the current shape of + this tensor. 
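The merge rule that `set_shape` applies can be sketched in plain Python (a hypothetical `merge_shapes` helper for illustration only, not part of the TensorFlow API; `None` stands for an unknown dimension):

```python
def merge_shapes(current, new):
    # Sketch of the merge rule described above: ranks must match, every
    # known dimension must agree, and an unknown (None) dimension defers
    # to whichever side knows the value.
    if current is None:
        return list(new)
    if len(current) != len(new):
        raise ValueError("Shapes %r and %r have different ranks" % (current, new))
    merged = []
    for cur, upd in zip(current, new):
        if cur is not None and upd is not None and cur != upd:
            raise ValueError("Dimension %r is incompatible with %r" % (cur, upd))
        merged.append(cur if upd is None else upd)
    return merged

print(merge_shapes([None, None, 3], [28, 28, 3]))  # [28, 28, 3]
```

This mirrors the image example above: the data-dependent height and width start as `None` and are refined to `28`, while an attempt to merge conflicting known dimensions raises `ValueError`.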
+ + + +#### Other Methods +- - - + +#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__} + +Creates a new `Tensor`. + +##### Args: + + +* `op`: An `Operation`. `Operation` that computes this tensor. +* `value_index`: An `int`. Index of the operation's endpoint that produces + this tensor. +* `dtype`: A `DType`. Type of elements stored in this tensor. + +##### Raises: + + +* `TypeError`: If the op is not an `Operation`. + + +- - - + +#### `tf.Tensor.device` {#Tensor.device} + +The name of the device on which this tensor will be produced, or None. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_check_numerics_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_check_numerics_ops.md deleted file mode 100644 index 9e72af79db..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_check_numerics_ops.md +++ /dev/null @@ -1,13 +0,0 @@ -### `tf.add_check_numerics_ops()` {#add_check_numerics_ops} - -Connect a `check_numerics` to every floating point tensor. - -`check_numerics` operations themselves are added for each `float` or `double` -tensor in the graph. For all ops in the graph, the `check_numerics` op for -all of its (`float` or `double`) inputs is guaranteed to run before the -`check_numerics` op on any of its outputs. - -##### Returns: - - A `group` op depending on all `check_numerics` ops added. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_to_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_to_collection.md new file mode 100644 index 0000000000..1d8d752917 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.add_to_collection.md @@ -0,0 +1,14 @@ +### `tf.add_to_collection(name, value)` {#add_to_collection} + +Wrapper for `Graph.add_to_collection()` using the default graph. 
+ +See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection) +for more details. + +##### Args: + + +* `name`: The key for the collection. For example, the `GraphKeys` class + contains many standard names for collections. +* `value`: The value to add to the collection. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmax.md deleted file mode 100644 index af0a2270a9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmax.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.argmax(input, dimension, name=None)` {#argmax} - -Returns the index with the largest value across dimensions of a tensor. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. -* `dimension`: A `Tensor` of type `int32`. - int32, 0 <= dimension < rank(input). Describes which dimension - of the input Tensor to reduce across. For vectors, use dimension = 0. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmin.md deleted file mode 100644 index 002d5ed816..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.argmin.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.argmin(input, dimension, name=None)` {#argmin} - -Returns the index with the smallest value across dimensions of a tensor. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. -* `dimension`: A `Tensor` of type `int32`. 
- int32, 0 <= dimension < rank(input). Describes which dimension - of the input Tensor to reduce across. For vectors, use dimension = 0. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_integer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_integer.md new file mode 100644 index 0000000000..c75ba58765 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_integer.md @@ -0,0 +1,30 @@ +### `tf.assert_integer(x, data=None, summarize=None, name=None)` {#assert_integer} + +Assert that `x` is of integer dtype. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_integer(x)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_integer(x)], x) +``` + +##### Args: + + +* `x`: `Tensor` whose basetype is integer and is not quantized. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_integer". + +##### Returns: + + Op that raises `InvalidArgumentError` if `x` is not of integer dtype. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_less_equal.md deleted file mode 100644 index d740746a61..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_less_equal.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.assert_less_equal(x, y, data=None, summarize=None, name=None)` {#assert_less_equal} - -Assert the condition `x <= y` holds element-wise.
- -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_less_equal(x, y)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_less_equal(x, y)], x) -``` - -This condition holds if for every pair of (possibly broadcast) elements -`x[i]`, `y[i]`, we have `x[i] <= y[i]`. -If both `x` and `y` are empty, this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`, `y`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_less_equal" - -##### Returns: - - Op that raises `InvalidArgumentError` if `x <= y` is False. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_non_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_non_negative.md new file mode 100644 index 0000000000..47f07a698a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_non_negative.md @@ -0,0 +1,34 @@ +### `tf.assert_non_negative(x, data=None, summarize=None, name=None)` {#assert_non_negative} + +Assert the condition `x >= 0` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_non_negative(x)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_non_negative(x)], x) +``` + +Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`. +If `x` is empty this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `data`: The tensors to print out if the condition is False. 
Defaults to + error message and first few entries of `x`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). + Defaults to "assert_non_negative". + +##### Returns: + + Op raising `InvalidArgumentError` unless `x` is all non-negative. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_proper_iterable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_proper_iterable.md deleted file mode 100644 index ba01073765..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_proper_iterable.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.assert_proper_iterable(values)` {#assert_proper_iterable} - -Static assert that values is a "proper" iterable. - -`Ops` that expect iterables of `Tensor` can call this to validate input. -Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves. - -##### Args: - - -* `values`: Object to be checked. - -##### Raises: - - -* `TypeError`: If `values` is not iterable or is one of - `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_type.md deleted file mode 100644 index e98b9dc4af..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_type.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.assert_type(tensor, tf_type)` {#assert_type} - -Asserts that the given `Tensor` is of the specified type. - -##### Args: - - -* `tensor`: A tensorflow `Tensor`. -* `tf_type`: A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc). - -##### Raises: - - -* `ValueError`: If the tensors data type doesn't match tf_type. 
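The "proper iterable" check described above can be sketched in plain Python (a simplified stand-in, not the TensorFlow implementation, which additionally rejects `Tensor`, `SparseTensor`, and `np.ndarray`):

```python
def assert_proper_iterable(values):
    # Sketch of the rule above: single objects that happen to be iterable
    # (byte/text strings here; also tensors and ndarrays in TensorFlow)
    # are rejected, and everything else must at least be iterable.
    if isinstance(values, (bytes, str)):
        raise TypeError("Expected a proper iterable, got %s" % type(values).__name__)
    if not hasattr(values, "__iter__"):
        raise TypeError("Expected an iterable, got %s" % type(values).__name__)

assert_proper_iterable([1, 2, 3])  # OK: a list is a proper iterable
```

A string such as `"abc"` is iterable but almost certainly a caller mistake in an API expecting a collection of tensors, which is why it is singled out.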
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.audio_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.audio_summary.md deleted file mode 100644 index a592378b88..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.audio_summary.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.audio_summary(tag, tensor, sample_rate, max_outputs=3, collections=None, name=None)` {#audio_summary} - -Outputs a `Summary` protocol buffer with audio. - -The summary has up to `max_outputs` summary values containing audio. The -audio is built from `tensor` which must be 3-D with shape `[batch_size, -frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are -assumed to be in the range of `[-1.0, 1.0]` with a sample rate of -`sample_rate`. - -The `tag` argument is a scalar `Tensor` of type `string`. It is used to -build the `tag` of the summary values: - -* If `max_outputs` is 1, the summary value tag is '*tag*/audio'. -* If `max_outputs` is greater than 1, the summary value tags are - generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc. - -##### Args: - - -* `tag`: A scalar `Tensor` of type `string`. Used to build the `tag` - of the summary values. -* `tensor`: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` - or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`. -* `sample_rate`: The sample rate of the signal in hertz. -* `max_outputs`: Max number of batch elements to generate audio for. -* `collections`: Optional list of ops.GraphKeys. The collections to add the - summary to. Defaults to [ops.GraphKeys.SUMMARIES] -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. 
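The tag-naming rule described above can be sketched with a small helper (`audio_summary_tags` is a hypothetical illustration, not a TensorFlow function):

```python
def audio_summary_tags(tag, max_outputs):
    # Sketch of the rule above: a single output gets 'tag/audio';
    # multiple outputs get 'tag/audio/0', 'tag/audio/1', and so on.
    if max_outputs == 1:
        return ["%s/audio" % tag]
    return ["%s/audio/%d" % (tag, i) for i in range(max_outputs)]

print(audio_summary_tags("waveform", 3))
# ['waveform/audio/0', 'waveform/audio/1', 'waveform/audio/2']
```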
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_ifft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_ifft2d.md deleted file mode 100644 index 4476637122..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_ifft2d.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_ifft2d(input, name=None)` {#batch_ifft2d} - -Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most - -2 dimensions of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. The inner-most 2 - dimensions of `input` are replaced with their inverse 2D Fourier Transform. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_matrix_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_matrix_inverse.md new file mode 100644 index 0000000000..231056a05c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_matrix_inverse.md @@ -0,0 +1,28 @@ +### `tf.batch_matrix_inverse(input, adjoint=None, name=None)` {#batch_matrix_inverse} + +Calculates the inverse of square invertible matrices or their adjoints + +(conjugate transposes). + +The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. The output is a tensor of the same shape as the input +containing the inverse for all input submatrices `[..., :, :]`. + +The op uses LU decomposition with partial pivoting to compute the inverses. + +If a matrix is not invertible there is no guarantee what the op does. It +may detect the condition and raise an exception or it may simply return a +garbage result. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. 
+ Shape is `[..., M, M]`. +* `adjoint`: An optional `bool`. Defaults to `False`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_to_space.md new file mode 100644 index 0000000000..d4a66ac8e0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.batch_to_space.md @@ -0,0 +1,37 @@ +### `tf.batch_to_space(input, crops, block_size, name=None)` {#batch_to_space} + +BatchToSpace for 4-D tensors of type T. + +Rearranges (permutes) data from batch into blocks of spatial data, followed by +cropping. This is the reverse transformation of SpaceToBatch. More specifically, +this op outputs a copy of the input tensor where values from the `batch` +dimension are moved in spatial blocks to the `height` and `width` dimensions, +followed by cropping along the `height` and `width` dimensions. + +##### Args: + + +* `input`: A `Tensor`. 4-D tensor with shape + `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, + depth]`. Note that the batch size of the input tensor must be divisible by + `block_size * block_size`. +* `crops`: A `Tensor` of type `int32`. + 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies + how many elements to crop from the intermediate result across the spatial + dimensions as follows: + + crops = [[crop_top, crop_bottom], [crop_left, crop_right]] + +* `block_size`: An `int`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + 4-D with shape `[batch, height, width, depth]`, where: + + height = height_pad - crop_top - crop_bottom + width = width_pad - crop_left - crop_right + + The attr `block_size` must be greater than one. It indicates the block size. 
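The shape arithmetic in the `batch_to_space` description above can be sketched in plain Python (shapes only, not the actual data movement; `batch_to_space_shape` is an illustrative helper, not a TensorFlow function):

```python
def batch_to_space_shape(in_shape, crops, block_size):
    # Sketch of the transformation above:
    # input  [batch*bs*bs, height_pad/bs, width_pad/bs, depth]
    # output [batch, height_pad - crops, width_pad - crops, depth]
    batch_bs2, hp_bs, wp_bs, depth = in_shape
    assert batch_bs2 % (block_size * block_size) == 0
    (crop_top, crop_bottom), (crop_left, crop_right) = crops
    batch = batch_bs2 // (block_size * block_size)
    height = hp_bs * block_size - crop_top - crop_bottom
    width = wp_bs * block_size - crop_left - crop_right
    return [batch, height, width, depth]

print(batch_to_space_shape([4, 1, 1, 1], [[0, 0], [0, 0]], 2))  # [1, 2, 2, 1]
```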
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ceil.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ceil.md new file mode 100644 index 0000000000..34e4a7feed --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ceil.md @@ -0,0 +1,14 @@ +### `tf.ceil(x, name=None)` {#ceil} + +Returns element-wise smallest integer not less than x. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.check_numerics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.check_numerics.md new file mode 100644 index 0000000000..46a8f6f7db --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.check_numerics.md @@ -0,0 +1,18 @@ +### `tf.check_numerics(tensor, message, name=None)` {#check_numerics} + +Checks a tensor for NaN and Inf values. + +When run, reports an `InvalidArgument` error if `tensor` has any values +that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is. + +##### Args: + + +* `tensor`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `message`: A `string`. Prefix of the error message. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `tensor`.
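The pass-through-or-fail behaviour of `check_numerics` described above can be sketched in plain Python over a flat list of floats (an illustrative stand-in, not the TensorFlow kernel):

```python
import math

def check_numerics(values, message):
    # Sketch of the behaviour above: raise if any value is NaN or Inf,
    # otherwise pass the input through unchanged.
    for v in values:
        if math.isnan(v) or math.isinf(v):
            raise ValueError("%s : tensor had NaN or Inf values" % message)
    return values

print(check_numerics([1.0, 2.5], "activations"))  # [1.0, 2.5]
```

The pass-through return value is what lets the real op be spliced into a graph without changing downstream computation.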
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_average_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_average_norm.md new file mode 100644 index 0000000000..4598e183d8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_average_norm.md @@ -0,0 +1,29 @@ +### `tf.clip_by_average_norm(t, clip_norm, name=None)` {#clip_by_average_norm} + +Clips tensor values to a maximum average L2-norm. + +Given a tensor `t`, and a maximum clip value `clip_norm`, this operation +normalizes `t` so that its average L2-norm is less than or equal to +`clip_norm`. Specifically, if the average L2-norm is already less than or +equal to `clip_norm`, then `t` is not modified. If the average L2-norm is +greater than `clip_norm`, then this operation returns a tensor of the same +type and shape as `t` with its values set to: + +`t * clip_norm / l2norm_avg(t)` + +In this case, the average L2-norm of the output tensor is `clip_norm`. + +This operation is typically used to clip gradients before applying them with +an optimizer. + +##### Args: + + +* `t`: A `Tensor`. +* `clip_norm`: A 0-D (scalar) `Tensor` > 0. A maximum clipping value. +* `name`: A name for the operation (optional). + +##### Returns: + + A clipped `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_global_norm.md deleted file mode 100644 index a40f621bf4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_global_norm.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)` {#clip_by_global_norm} - -Clips values of multiple tensors by the ratio of the sum of their norms. 
- -Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`, -this operation returns a list of clipped tensors `list_clipped` -and the global norm (`global_norm`) of all tensors in `t_list`. Optionally, -if you've already computed the global norm for `t_list`, you can specify -the global norm with `use_norm`. - -To perform the clipping, the values `t_list[i]` are set to: - - t_list[i] * clip_norm / max(global_norm, clip_norm) - -where: - - global_norm = sqrt(sum([l2norm(t)**2 for t in t_list])) - -If `clip_norm > global_norm` then the entries in `t_list` remain as they are, -otherwise they're all shrunk by the global ratio. - -Any of the entries of `t_list` that are of type `None` are ignored. - -This is the correct way to perform gradient clipping (for example, see -[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063) -([pdf](http://arxiv.org/pdf/1211.5063.pdf))). - -However, it is slower than `clip_by_norm()` because all the parameters must be -ready before the clipping operation can be performed. - -##### Args: - - -* `t_list`: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. -* `clip_norm`: A 0-D (scalar) `Tensor` > 0. The clipping ratio. -* `use_norm`: A 0-D (scalar) `Tensor` of type `float` (optional). The global - norm to use. If not provided, `global_norm()` is used to compute the norm. -* `name`: A name for the operation (optional). - -##### Returns: - - -* `list_clipped`: A list of `Tensors` of the same type as `list_t`. -* `global_norm`: A 0-D (scalar) `Tensor` representing the global norm. - -##### Raises: - - -* `TypeError`: If `t_list` is not a sequence. 
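The global-norm formula above can be sketched in plain Python over lists of floats (a simplified stand-in for the TensorFlow op, which operates on tensors and `IndexedSlices`):

```python
import math

def clip_by_global_norm(t_list, clip_norm):
    # Sketch of the formula above: global_norm is the L2 norm of all
    # entries taken together, and each entry is scaled by
    # clip_norm / max(global_norm, clip_norm), so nothing changes when
    # the global norm is already within the limit. None entries are ignored.
    global_norm = math.sqrt(sum(x * x for t in t_list if t is not None for x in t))
    scale = clip_norm / max(global_norm, clip_norm)
    clipped = [None if t is None else [x * scale for x in t] for t in t_list]
    return clipped, global_norm

clipped, gn = clip_by_global_norm([[3.0, 4.0]], clip_norm=2.5)
print(gn)       # 5.0
print(clipped)  # [[1.5, 2.0]]
```

Note that all entries are scaled by the same factor, which preserves the direction of the overall gradient; this is the property that distinguishes global-norm clipping from clipping each tensor independently with `clip_by_norm()`.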
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_value.md new file mode 100644 index 0000000000..7cd7e0311e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.clip_by_value.md @@ -0,0 +1,21 @@ +### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value} + +Clips tensor values to a specified min and max. + +Given a tensor `t`, this operation returns a tensor of the same type and +shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. +Any values less than `clip_value_min` are set to `clip_value_min`. Any values +greater than `clip_value_max` are set to `clip_value_max`. + +##### Args: + + +* `t`: A `Tensor`. +* `clip_value_min`: A 0-D (scalar) `Tensor`. The minimum value to clip by. +* `clip_value_max`: A 0-D (scalar) `Tensor`. The maximum value to clip by. +* `name`: A name for the operation (optional). + +##### Returns: + + A clipped `Tensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.conj.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.conj.md deleted file mode 100644 index 6df004b0cd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.conj.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.conj(input, name=None)` {#conj} - -Returns the complex conjugate of a complex number. - -Given a tensor `input` of complex numbers, this operation returns a tensor of -complex numbers that are the complex conjugate of each element in `input`. The -complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the -real part and *b* is the imaginary part. - -The complex conjugate returned by this operation is of the form \\(a - bj\\). 
- -For example: - -``` -# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] -tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j] -``` - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.copy_graph.copy_op_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.copy_graph.copy_op_to_graph.md deleted file mode 100644 index d549132fa2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.copy_graph.copy_op_to_graph.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='')` {#copy_op_to_graph} - -Given an `Operation` 'org_instance` from one `Graph`, -initializes and returns a copy of it from another `Graph`, -under the specified scope (default `""`). - -The copying is done recursively, so any `Operation` whose output -is required to evaluate the `org_instance`, is also copied (unless -already done). - -Since `Variable` instances are copied separately, those required -to evaluate `org_instance` must be provided as input. - -Args: -org_instance: An `Operation` from some `Graph`. Could be a - `Placeholder` as well. -to_graph: The `Graph` to copy `org_instance` to. -variables: An iterable of `Variable` instances to copy `org_instance` to. -scope: A scope for the new `Variable` (default `""`). - -##### Returns: - - The copied `Operation` from `to_graph`. - -##### Raises: - - -* `TypeError`: If `org_instance` is not an `Operation` or `Tensor`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.BaseDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.BaseDistribution.md new file mode 100644 index 0000000000..65b516af08 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.BaseDistribution.md @@ -0,0 +1,195 @@ +Abstract base class for probability distributions. + +This class, along with `ContinuousDistribution` and `DiscreteDistribution`, +defines the API for probability distributions. + +Users will never instantiate a `BaseDistribution`, but will instead +instantiate subclasses of either `ContinuousDistribution` or +`DiscreteDistribution`. + +Developers of new distributions should prefer to subclass +`ContinuousDistribution` or `DiscreteDistribution`. + +### API + +The key methods for probability distributions are defined here. The likelihood +functions (`pdf`, `log_pdf`) and (`pmf`, `log_pmf`) are defined in +`ContinuousDistribution` and `DiscreteDistribution`, respectively. + +To keep ops generated by the distribution tied together by name, subclasses +should override `name` and use it to prepend names of ops in other methods +(see `cdf` for an example). + +Subclasses that wish to support `cdf` and `log_cdf` can override `log_cdf` +and use the base class's implementation for `cdf`. + +### Broadcasting, batching, and shapes + +All distributions support batches of independent distributions of that type. +The batch shape is determined by broadcasting together the parameters. + +The shape of arguments to `__init__`, `cdf`, `log_cdf`, and the likelihood +functions defined in `ContinuousDistribution` and `DiscreteDistribution` +reflect this broadcasting, as does the return value of `sample`.
+ +`sample_shape = (n,) + batch_shape + event_shape`, where `sample_shape` is the +shape of the `Tensor` returned from `sample`, `n` is the number of samples, +`batch_shape` defines how many independent distributions there are, and +`event_shape` defines the shape of samples from each of those independent +distributions. Samples are independent along the `batch_shape` dimensions, +but not necessarily so along the `event_shape` dimensions (depending on +the particulars of the underlying distribution). + +Using the `Uniform` distribution as an example: + +```python +minval = 3.0 +maxval = [[4.0, 6.0], + [10.0, 12.0]] + +# Broadcasting: +# This instance represents 4 Uniform distributions. Each has a lower bound at +# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape. +u = Uniform(minval, maxval) + +# `event_shape` is `TensorShape([])`. +event_shape = u.get_event_shape() +# `event_shape_t` is a `Tensor` which will evaluate to a scalar 1. +event_shape_t = u.event_shape + +# Sampling returns a sample per distribution. `samples` has shape +# (5, 2, 2), which is (n,) + batch_shape + event_shape, where n=5, +# batch_shape=(2, 2), and event_shape=(). +samples = u.sample(5) + +# The broadcasting holds across methods. Here we use `cdf` as an example. The +# same holds for `log_cdf` and the likelihood functions. + +# `cum_prob` has shape (2, 2) as the `value` argument was broadcasted to the +# shape of the `Uniform` instance. +cum_prob_broadcast = u.cdf(4.0) + +# `cum_prob`'s shape is (2, 2), one per distribution. No broadcasting +# occurred. +cum_prob_per_dist = u.cdf([[4.0, 5.0], + [6.0, 7.0]]) + +# INVALID as the `value` argument is not broadcastable to the distribution's +# shape. +cum_prob_invalid = u.cdf([4.0, 5.0, 6.0]) +``` +- - - + +#### `tf.contrib.distributions.BaseDistribution.batch_shape(name=None)` {#BaseDistribution.batch_shape} + +Batch dimensions of this instance as a 1-D int32 `Tensor`.
+ +The product of the dimensions of the `batch_shape` is the number of +independent distributions of this kind the instance represents. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `batch_shape` + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.cdf(value, name='cdf')` {#BaseDistribution.cdf} + +Cumulative distribution function. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.dtype` {#BaseDistribution.dtype} + +dtype of samples from this distribution. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.entropy(name=None)` {#BaseDistribution.entropy} + +Entropy of the distribution in nats. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.event_shape(name=None)` {#BaseDistribution.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.get_batch_shape()` {#BaseDistribution.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.get_event_shape()` {#BaseDistribution.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.log_cdf(value, name='log_cdf')` {#BaseDistribution.log_cdf} + +Log CDF. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.mean` {#BaseDistribution.mean} + + + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.name` {#BaseDistribution.name} + +Name to prepend to all ops. + + +- - - + +#### `tf.contrib.distributions.BaseDistribution.sample(n, seed=None, name=None)` {#BaseDistribution.sample} + +Generate `n` samples. + +##### Args: + + +* `n`: scalar. 
Number of samples to draw from each distribution. +* `seed`: Python integer seed for RNG +* `name`: name to give to the op. + +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ContinuousDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ContinuousDistribution.md new file mode 100644 index 0000000000..e474870cd4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ContinuousDistribution.md @@ -0,0 +1,153 @@ +Base class for continuous probability distributions. + +`ContinuousDistribution` defines the API for the likelihood functions `pdf` +and `log_pdf` of continuous probability distributions, and a property +`is_reparameterized` (returning `True` or `False`) which describes +whether the samples of this distribution are calculated in a differentiable +way from a non-parameterized distribution. For example, the `Normal` +distribution with parameters `mu` and `sigma` is reparameterized as + +```Normal(mu, sigma) = sigma * Normal(0, 1) + mu``` + +Subclasses must override `pdf` and `log_pdf` but one can call this base +class's implementation. They must also override the `is_reparameterized` +property. + +See `BaseDistribution` for more information on the API for probability +distributions. +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.batch_shape(name=None)` {#ContinuousDistribution.batch_shape} + +Batch dimensions of this instance as a 1-D int32 `Tensor`. + +The product of the dimensions of the `batch_shape` is the number of +independent distributions of this kind the instance represents. 
+ +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `batch_shape` + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.cdf(value, name='cdf')` {#ContinuousDistribution.cdf} + +Cumulative distribution function. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.dtype` {#ContinuousDistribution.dtype} + +dtype of samples from this distribution. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.entropy(name=None)` {#ContinuousDistribution.entropy} + +Entropy of the distribution in nats. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.event_shape(name=None)` {#ContinuousDistribution.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.get_batch_shape()` {#ContinuousDistribution.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.get_event_shape()` {#ContinuousDistribution.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.is_reparameterized` {#ContinuousDistribution.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.log_cdf(value, name='log_cdf')` {#ContinuousDistribution.log_cdf} + +Log CDF. + + +- - - + +#### `tf.contrib.distributions.ContinuousDistribution.log_pdf(value, name='log_pdf')` {#ContinuousDistribution.log_pdf} + +Log of the probability density function. 
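The contract between the two likelihood methods is that `log_pdf(x) == log(pdf(x))` wherever the density is positive; subclasses typically implement the log form directly for numerical stability rather than taking `log(pdf(...))`. A minimal pure-Python sketch of that invariant, using a hand-written Normal density as a stand-in (not TensorFlow ops):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of Normal(mu, sigma) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def normal_log_pdf(x, mu=0.0, sigma=1.0):
    """Log-density, computed directly (no exp followed by log)."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)

# The invariant both methods must satisfy:
assert abs(normal_log_pdf(1.5) - math.log(normal_pdf(1.5))) < 1e-12
```

For extreme arguments, `normal_pdf` underflows to 0 while `normal_log_pdf` stays finite, which is why the log form is the one usually implemented directly.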
+
+
+- - -
+
+#### `tf.contrib.distributions.ContinuousDistribution.mean` {#ContinuousDistribution.mean}
+
+
+
+
+- - -
+
+#### `tf.contrib.distributions.ContinuousDistribution.name` {#ContinuousDistribution.name}
+
+Name to prepend to all ops.
+
+
+- - -
+
+#### `tf.contrib.distributions.ContinuousDistribution.pdf(value, name='pdf')` {#ContinuousDistribution.pdf}
+
+Probability density function.
+
+
+- - -
+
+#### `tf.contrib.distributions.ContinuousDistribution.sample(n, seed=None, name=None)` {#ContinuousDistribution.sample}
+
+Generate `n` samples.
+
+##### Args:
+
+
+* `n`: scalar. Number of samples to draw from each distribution.
+* `seed`: Python integer seed for RNG
+* `name`: name to give to the op.
+
+##### Returns:
+
+
+* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
+    with values of type `self.dtype`.
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.StudentT.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.StudentT.md
new file mode 100644
index 0000000000..816e5d5a83
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.StudentT.md
@@ -0,0 +1,245 @@
+Student's t distribution with degree-of-freedom parameter df.
+
+#### Mathematical details
+
+The PDF of this distribution is:
+
+`f(t) = gamma((df+1)/2)/sqrt(df*pi)/gamma(df/2)*(1+t^2/df)^(-(df+1)/2)`
+
+#### Examples
+
+Examples of initialization of one or a batch of distributions.
+
+```python
+# Define a single scalar Student t distribution.
+single_dist = tf.contrib.distributions.StudentT(df=3, mu=0., sigma=1.)
+
+# Evaluate the pdf at 1, returning a scalar Tensor.
+single_dist.pdf(1.)
+
+# Define a batch of two scalar-valued Student t's.
+# The first has degrees of freedom 2, mean 1, and scale 11.
+# The second has degrees of freedom 3, mean 2, and scale 22.
+multi_dist = tf.contrib.distributions.StudentT(df=[2, 3], + mu=[1, 2.], + sigma=[11, 22.]) + +# Evaluate the pdf of the first distribution on 0, and the second on 1.5, +# returning a length two tensor. +multi_dist.pdf([0, 1.5]) + +# Get 3 samples, returning a 3 x 2 tensor. +multi_dist.sample(3) +``` + +Arguments are broadcast when possible. + +```python +# Define a batch of two Student's t distributions. +# Both have df 2 and mean 1, but different scales. +dist = tf.contrib.distributions.StudentT(df=2, mu=1, sigma=[11, 22.]) + +# Evaluate the pdf of both distributions on the same point, 3.0, +# returning a length 2 tensor. +dist.pdf(3.0) +``` +- - - + +#### `tf.contrib.distributions.StudentT.__init__(df, mu, sigma, name='StudentT')` {#StudentT.__init__} + +Construct Student's t distributions. + +The distributions have degree of freedom `df`, mean `mu`, and scale `sigma`. + +The parameters `df`, `mu`, and `sigma` must be shaped in a way that supports +broadcasting (e.g. `df + mu + sigma` is a valid operation). + +##### Args: + + +* `df`: `float` or `double` tensor, the degrees of freedom of the + distribution(s). `df` must contain only positive values. +* `mu`: `float` or `double` tensor, the means of the distribution(s). +* `sigma`: `float` or `double` tensor, the scaling factor for the + distribution(s). `sigma` must contain only positive values. + Note that `sigma` is not the standard deviation of this distribution. +* `name`: The name to give Ops created by the initializer. + +##### Raises: + + +* `TypeError`: if mu and sigma are different dtypes. + + +- - - + +#### `tf.contrib.distributions.StudentT.batch_shape(name='batch_shape')` {#StudentT.batch_shape} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.cdf(value, name='cdf')` {#StudentT.cdf} + +Cumulative distribution function. + + +- - - + +#### `tf.contrib.distributions.StudentT.df` {#StudentT.df} + +Degrees of freedom in these Student's t distribution(s). 
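The closed-form PDF given under Mathematical details above can be checked directly. A pure-Python sketch for the standard case (mu=0, sigma=1; the class's `mu` and `sigma` shift and rescale this), not using TensorFlow:

```python
import math

def student_t_pdf(t, df):
    """Standard Student's t density, per the docstring formula
    f(t) = gamma((df+1)/2) / (sqrt(df*pi) * gamma(df/2)) * (1 + t^2/df)^(-(df+1)/2)."""
    coeff = math.gamma((df + 1) / 2.0) / (math.sqrt(df * math.pi) * math.gamma(df / 2.0))
    return coeff * (1.0 + t * t / df) ** (-(df + 1) / 2.0)

# df=1 is the Cauchy distribution, whose density at 0 is 1/pi.
assert abs(student_t_pdf(0.0, 1) - 1.0 / math.pi) < 1e-12
# The density is symmetric about 0.
assert student_t_pdf(2.0, 5) == student_t_pdf(-2.0, 5)
```

Note that `math.gamma` overflows for arguments above roughly 171, so for very large `df` a log-gamma formulation (`math.lgamma`) would be needed instead.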
+ + +- - - + +#### `tf.contrib.distributions.StudentT.dtype` {#StudentT.dtype} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.entropy(name='entropy')` {#StudentT.entropy} + +The entropy of Student t distribution(s). + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. + + +- - - + +#### `tf.contrib.distributions.StudentT.event_shape(name='event_shape')` {#StudentT.event_shape} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.get_batch_shape()` {#StudentT.get_batch_shape} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.get_event_shape()` {#StudentT.get_event_shape} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.is_reparameterized` {#StudentT.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.log_cdf(value, name='log_cdf')` {#StudentT.log_cdf} + +Log CDF. + + +- - - + +#### `tf.contrib.distributions.StudentT.log_pdf(x, name='log_pdf')` {#StudentT.log_pdf} + +Log pdf of observations in `x` under these Student's t-distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `df`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.StudentT.mean` {#StudentT.mean} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.mu` {#StudentT.mu} + +Locations of these Student's t distribution(s). + + +- - - + +#### `tf.contrib.distributions.StudentT.name` {#StudentT.name} + + + + +- - - + +#### `tf.contrib.distributions.StudentT.pdf(x, name='pdf')` {#StudentT.pdf} + +The PDF of observations in `x` under these Student's t distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `df`, `mu`, and + `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. 
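Since `sigma` is a scaling factor and not the standard deviation (see `__init__` above), it is worth recalling the standard closed form: the distribution's variance is `sigma^2 * df / (df - 2)` for `df > 2` (a textbook fact about Student's t, not a statement about what this class returns for smaller `df`). A pure-Python check of the standard (mu=0, sigma=1) case by numerically integrating the docstring's density:

```python
import math

def student_t_pdf(t, df):
    """Standard Student's t density (mu=0, sigma=1), from the class docstring."""
    coeff = math.gamma((df + 1) / 2.0) / (math.sqrt(df * math.pi) * math.gamma(df / 2.0))
    return coeff * (1.0 + t * t / df) ** (-(df + 1) / 2.0)

def numeric_variance(df, lo=-100.0, hi=100.0, n=200000):
    """Midpoint-rule integral of t^2 * f(t); for df > 2 the tails
    beyond |t| = 100 contribute a negligible amount."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * h
        total += t * t * student_t_pdf(t, df)
    return total * h

# Standard fact: Var = df / (df - 2) when df > 2 (here df=5 gives 5/3).
assert abs(numeric_variance(5.0) - 5.0 / 3.0) < 1e-3
```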
+ + +- - - + +#### `tf.contrib.distributions.StudentT.sample(n, seed=None, name='sample')` {#StudentT.sample} + +Sample `n` observations from the Student t Distributions. + +##### Args: + + +* `n`: `Scalar`, type int32, the number of observations to sample. +* `seed`: Python integer, the random seed. +* `name`: The name to give this op. + +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. + + +- - - + +#### `tf.contrib.distributions.StudentT.sigma` {#StudentT.sigma} + +Scaling factors of these Student's t distribution(s). + + +- - - + +#### `tf.contrib.distributions.StudentT.variance` {#StudentT.variance} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md deleted file mode 100644 index 89e4e5ca3c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.normal_congugates_known_sigma_predictive.md +++ /dev/null @@ -1,55 +0,0 @@ -### `tf.contrib.distributions.normal_congugates_known_sigma_predictive(prior, sigma, s, n)` {#normal_congugates_known_sigma_predictive} - -Posterior predictive Normal distribution w. conjugate prior on the mean. - -This model assumes that `n` observations (with sum `s`) come from a -Normal with unknown mean `mu` (described by the Normal `prior`) -and known variance `sigma^2`. The "known sigma predictive" -is the distribution of new observations, conditioned on the existing -observations and our prior. - -Accepts a prior Normal distribution object, having parameters -`mu0` and `sigma0`, as well as known `sigma` values of the predictive -distribution(s) (also assumed Normal), -and statistical estimates `s` (the sum(s) of the observations) and -`n` (the number(s) of observations). 
- -Calculates the Normal distribution(s) `p(x | sigma^2)`: - -``` - p(x | sigma^2) = int N(x | mu, sigma^2) N(mu | prior.mu, prior.sigma^2) dmu - = N(x | prior.mu, 1/(sigma^2 + prior.sigma^2)) -``` - -Returns the predictive posterior distribution object, with parameters -`(mu', sigma'^2)`, where: - -``` -sigma_n^2 = 1/(1/sigma0^2 + n/sigma^2), -mu' = (mu0/sigma0^2 + s/sigma^2) * sigma_n^2. -sigma'^2 = sigma_n^2 + sigma^2, -``` - -Distribution parameters from `prior`, as well as `sigma`, `s`, and `n`. -will broadcast in the case of multidimensional sets of parameters. - -##### Args: - - -* `prior`: `Normal` object of type `dtype`: - the prior distribution having parameters `(mu0, sigma0)`. -* `sigma`: tensor of type `dtype`, taking values `sigma > 0`. - The known stddev parameter(s). -* `s`: Tensor of type `dtype`. The sum(s) of observations. -* `n`: Tensor of type `int`. The number(s) of observations. - -##### Returns: - - A new Normal predictive distribution object. - -##### Raises: - - -* `TypeError`: if dtype of `s` does not match `dtype`, or `prior` is not a - Normal object. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.fully_connected.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.fully_connected.md new file mode 100644 index 0000000000..da63a14cd9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.fully_connected.md @@ -0,0 +1,46 @@ +### `tf.contrib.layers.fully_connected(*args, **kwargs)` {#fully_connected} + +Adds a fully connected layer. + +`fully_connected` creates a variable called `weights`, representing a fully +connected weight matrix, which is multiplied by the `inputs` to produce a +`Tensor` of hidden units. If a `normalizer_fn` is provided (such as +`batch_norm`), it is then applied. 
Otherwise, if `normalizer_fn` is
+None and a `biases_initializer` is provided then a `biases` variable will be
+created and added to the hidden units. Finally, if `activation_fn` is not
+`None`, it is applied to the hidden units as well.
+
+Note that if `inputs` has a rank greater than 2, then `inputs` is flattened
+prior to the initial matrix multiply by `weights`.
+
+##### Args:
+
+
+* `inputs`: A tensor with at least rank 2 and a known value for the last
+    dimension, e.g. `[batch_size, depth]`, `[None, None, None, channels]`.
+* `num_outputs`: Integer, the number of output units in the layer.
+* `activation_fn`: activation function.
+* `normalizer_fn`: normalization function to use instead of `biases`. If
+    `normalizer_fn` is provided then `biases_initializer` and
+    `biases_regularizer` are ignored and `biases` are not created nor added.
+* `normalizer_params`: normalization function parameters.
+* `weights_initializer`: An initializer for the weights.
+* `weights_regularizer`: Optional regularizer for the weights.
+* `biases_initializer`: An initializer for the biases. If `None`, skip biases.
+* `biases_regularizer`: Optional regularizer for the biases.
+* `reuse`: whether or not the layer and its variables should be reused. To be
+    able to reuse the layer, `scope` must be given.
+* `variables_collections`: Optional list of collections for all the variables,
+    or a dictionary containing a different list of collections per variable.
+* `outputs_collections`: collection to which the outputs are added.
+* `scope`: Optional scope for `variable_op_scope`.
+
+##### Returns:
+
+    The tensor variable representing the result of the series of operations.
+
+##### Raises:
+
+
+* `ValueError`: if `inputs` has rank less than 2 or if its last dimension is not set.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.l1_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.l1_regularizer.md deleted file mode 100644 index 1aa8074980..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.l1_regularizer.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.contrib.layers.l1_regularizer(scale)` {#l1_regularizer} - -Returns a function that can be used to apply L1 regularization to weights. - -L1 regularization encourages sparsity. - -##### Args: - - -* `scale`: A scalar multiplier `Tensor`. 0.0 disables the regularizer. - -##### Returns: - - A function with signature `l1(weights, name=None)` that apply L1 - regularization. - -##### Raises: - - -* `ValueError`: If scale is outside of the range [0.0, 1.0] or if scale is not a - float. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activation.md deleted file mode 100644 index 3aed0ff43c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activation.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.contrib.layers.summarize_activation(op)` {#summarize_activation} - -Summarize an activation. - -This applies the given activation and adds useful summaries specific to the -activation. - -##### Args: - - -* `op`: The tensor to summarize (assumed to be a layer activation). - -##### Returns: - - The summary op created to summarize `op`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activations.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activations.md deleted file mode 100644 index dc2e7a6044..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.summarize_activations.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)` {#summarize_activations} - -Summarize activations, using `summarize_activation` to summarize. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.NanLossDuringTrainingError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.NanLossDuringTrainingError.md deleted file mode 100644 index 8b13789179..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.NanLossDuringTrainingError.md +++ /dev/null @@ -1 +0,0 @@ - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowDNNClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowDNNClassifier.md deleted file mode 100644 index 03c779259a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowDNNClassifier.md +++ /dev/null @@ -1,302 +0,0 @@ -TensorFlow DNN Classifier model. - -Parameters: - hidden_units: List of hidden units per layer. - n_classes: Number of classes in the target. - batch_size: Mini batch size. - steps: Number of steps to run over data. - optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad". - learning_rate: If this is constant float value, no decay function is used. - Instead, a customized decay function can be passed that accepts - global_step as parameter and returns a Tensor. - e.g. 
exponential decay function: - def exp_decay(global_step): - return tf.train.exponential_decay( - learning_rate=0.1, global_step, - decay_steps=2, decay_rate=0.001) - class_weight: None or list of n_classes floats. Weight associated with - classes for loss computation. If not given, all classes are - supposed to have weight one. - continue_training: when continue_training is True, once initialized - model will be continuely trained on every call of fit. - config: RunConfig object that controls the configurations of the - session, e.g. num_cores, gpu_memory_fraction, etc. - dropout: When not None, the probability we will drop out a given coordinate. -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.__init__(hidden_units, n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1, dropout=None)` {#TensorFlowDNNClassifier.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.bias_` {#TensorFlowDNNClassifier.bias_} - -Returns bias of the DNN's bias layers. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowDNNClassifier.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowDNNClassifier.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. -This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. 
Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.get_params(deep=True)` {#TensorFlowDNNClassifier.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.get_tensor(name)` {#TensorFlowDNNClassifier.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.get_tensor_value(name)` {#TensorFlowDNNClassifier.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.get_variable_names()` {#TensorFlowDNNClassifier.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.model_dir` {#TensorFlowDNNClassifier.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.partial_fit(x, y)` {#TensorFlowDNNClassifier.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. 
This either can -implement iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowDNNClassifier.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.predict_proba(x, batch_size=None)` {#TensorFlowDNNClassifier.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. 
The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.restore(cls, path, config=None)` {#TensorFlowDNNClassifier.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.save(path)` {#TensorFlowDNNClassifier.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.set_params(**params)` {#TensorFlowDNNClassifier.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowDNNClassifier.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowDNNClassifier.weights_` {#TensorFlowDNNClassifier.weights_} - -Returns weights of the DNN weight layers. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowLinearClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowLinearClassifier.md deleted file mode 100644 index 469aa72b3a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.TensorFlowLinearClassifier.md +++ /dev/null @@ -1,279 +0,0 @@ -TensorFlow Linear Classifier model. -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.__init__(n_classes, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowLinearClassifier.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.bias_` {#TensorFlowLinearClassifier.bias_} - -Returns weights of the linear classifier. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowLinearClassifier.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowLinearClassifier.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: called first time constructs the graph and initializers -variables. Consecutives times it will continue training the same model. -This logic follows partial_fit() interface in scikit-learn. - -To restart learning, create new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. 
- If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.get_params(deep=True)` {#TensorFlowLinearClassifier.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.get_tensor(name)` {#TensorFlowLinearClassifier.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.get_tensor_value(name)` {#TensorFlowLinearClassifier.get_tensor_value} - -Returns value of the tensor give by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.get_variable_names()` {#TensorFlowLinearClassifier.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.model_dir` {#TensorFlowLinearClassifier.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.partial_fit(x, y)` {#TensorFlowLinearClassifier.partial_fit} - -Incremental fit on a batch of samples. - -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This either can -implement iterative training or out-of-core/online training. 
- -This is especially useful when the whole dataset is too big to -fit in memory at the same time. Or when model is taking long time -to converge, and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class label in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowLinearClassifier.predict} - -Predict class or regression for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.predict_proba(x, batch_size=None)` {#TensorFlowLinearClassifier.predict_proba} - -Predict class probability of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If test set is too big, use batch size to split - it into mini batches. By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. 
- - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.restore(cls, path, config=None)` {#TensorFlowLinearClassifier.restore} - -Restores model from give path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.save(path)` {#TensorFlowLinearClassifier.save} - -Saves checkpoints and graph to given path. - -##### Args: - - -* `path`: Folder to save model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.set_params(**params)` {#TensorFlowLinearClassifier.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``__`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowLinearClassifier.train} - -Trains a model given input builder function. - -##### Args: - - -* `input_fn`: Input builder function, returns tuple of dicts or - dict and Tensor. -* `steps`: number of steps to train model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowLinearClassifier.weights_` {#TensorFlowLinearClassifier.weights_} - -Returns weights of the linear classifier. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.run_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.run_n.md deleted file mode 100644 index 8fa8f09cb5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.run_n.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.contrib.learn.run_n(output_dict, feed_dict=None, restore_checkpoint_path=None, n=1)` {#run_n} - -Run `output_dict` tensors `n` times, with the same `feed_dict` each run. - -##### Args: - - -* `output_dict`: A `dict` mapping string names to tensors to run. Must all be - from the same graph. -* `feed_dict`: `dict` of input values to feed each run. -* `restore_checkpoint_path`: A string containing the path to a checkpoint to - restore. -* `n`: Number of times to repeat. - -##### Returns: - - A list of `n` `dict` objects, each containing values read from `output_dict` - tensors. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.train.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.train.md deleted file mode 100644 index 65057636ce..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.train.md +++ /dev/null @@ -1,62 +0,0 @@ -### `tf.contrib.learn.train(graph, output_dir, train_op, loss_op, global_step_tensor=None, init_op=None, init_feed_dict=None, init_fn=None, log_every_steps=10, supervisor_is_chief=True, supervisor_master='', supervisor_save_model_secs=600, supervisor_save_summaries_steps=100, feed_fn=None, max_steps=None, fail_on_nan_loss=True, monitors=None)` {#train} - -Train a model. - -Given `graph`, a directory to write outputs to (`output_dir`), and some ops, -run a training loop. The given `train_op` performs one step of training on the -model. The `loss_op` represents the objective function of the training. 
It is -expected to increment the `global_step_tensor`, a scalar integer tensor -counting training steps. This function uses `Supervisor` to initialize the -graph (from a checkpoint if one is available in `output_dir`), write summaries -defined in the graph, and write regular checkpoints as defined by -`supervisor_save_model_secs`. - -Training continues until `global_step_tensor` evaluates to `max_steps`, or, if -`fail_on_nan_loss`, until `loss_op` evaluates to `NaN`. In that case the -program is terminated with exit code 1. - -##### Args: - - -* `graph`: A graph to train. It is expected that this graph is not in use - elsewhere. -* `output_dir`: A directory to write outputs to. -* `train_op`: An op that performs one training step when run. -* `loss_op`: A scalar loss tensor. -* `global_step_tensor`: A tensor representing the global step. If none is given, - one is extracted from the graph using the same logic as in `Supervisor`. -* `init_op`: An op that initializes the graph. If `None`, use `Supervisor`'s - default. -* `init_feed_dict`: A dictionary that maps `Tensor` objects to feed values. - This feed dictionary will be used when `init_op` is evaluated. -* `init_fn`: Optional callable passed to Supervisor to initialize the model. -* `log_every_steps`: Output logs regularly. The logs contain timing data and the - current loss. -* `supervisor_is_chief`: Whether the current process is the chief supervisor in - charge of restoring the model and running standard services. -* `supervisor_master`: The master string to use when preparing the session. -* `supervisor_save_model_secs`: Save a checkpoint every - `supervisor_save_model_secs` seconds when training. -* `supervisor_save_summaries_steps`: Save summaries every - `supervisor_save_summaries_steps` seconds when training. -* `feed_fn`: A function that is called every iteration to produce a `feed_dict` - passed to `session.run` calls. Optional. -* `max_steps`: Train until `global_step_tensor` evaluates to this value. 
-* `fail_on_nan_loss`: If true, raise `NanLossDuringTrainingError` if `loss_op` - evaluates to `NaN`. If false, continue training as if nothing happened. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - The final loss value. - -##### Raises: - - -* `ValueError`: If `global_step_tensor` is not provided. See - `tf.contrib.framework.get_global_step` for how we look it up if not - provided explicitly. -* `NanLossDuringTrainingError`: If `fail_on_nan_loss` is `True`, and loss ever - evaluates to `NaN`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.confusion_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.confusion_matrix.md deleted file mode 100644 index a57fa44318..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.confusion_matrix.md +++ /dev/null @@ -1,45 +0,0 @@ -### `tf.contrib.metrics.confusion_matrix(predictions, labels, num_classes=None, name=None)` {#confusion_matrix} - -Computes the confusion matrix from predictions and labels - -Calculate the Confusion Matrix for a pair of prediction and -label 1-D int arrays. - -Considering a prediction array such as: `[1, 2, 3]` -And a label array such as: `[2, 2, 3]` - -##### The confusion matrix returned would be the following one: - - [[0, 0, 0] - [0, 1, 0] - [0, 1, 0] - [0, 0, 1]] - -Where the matrix rows represent the prediction labels and the columns -represents the real labels. The confusion matrix is always a 2-D array -of shape [n, n], where n is the number of valid labels for a given -classification task. Both prediction and labels must be 1-D arrays of -the same shape in order for this function to work. - -##### Args: - - -* `predictions`: A 1-D array represeting the predictions for a given - classification. -* `labels`: A 1-D represeting the real labels for the classification task. 
-* `num_classes`: The possible number of labels the classification task can - have. If this value is not provided, it will be calculated - using both predictions and labels array. -* `name`: Scope name. - -##### Returns: - - A l X l matrix represeting the confusion matrix, where l in the number of - possible labels in the classification task. - -##### Raises: - - -* `ValueError`: If both predictions and labels are not 1-D vectors and do not - have the same size. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.set_intersection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.set_intersection.md deleted file mode 100644 index bd42f3fa01..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.set_intersection.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.contrib.metrics.set_intersection(a, b, validate_indices=True)` {#set_intersection} - -Compute set intersection of elements in last dimension of `a` and `b`. - -All but the last dimension of `a` and `b` must match. - -##### Args: - - -* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices - must be sorted in row-major order. -* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be - `SparseTensor` if `a` is `SparseTensor`. If sparse, indices must be - sorted in row-major order. -* `validate_indices`: Whether to validate the order and range of sparse indices - in `a` and `b`. - -##### Returns: - - A `SparseTensor` with the same rank as `a` and `b`, and all but the last - dimension the same. Elements along the last dimension contain the - intersections. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_percentage_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_percentage_less.md deleted file mode 100644 index 40ddae4d31..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_percentage_less.md +++ /dev/null @@ -1,47 +0,0 @@ -### `tf.contrib.metrics.streaming_percentage_less(values, threshold, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_percentage_less} - -Computes the percentage of values less than the given threshold. - -The `streaming_percentage_less` function creates two local variables, -`total` and `count` that are used to compute the percentage of `values` that -fall below `threshold`. This rate is ultimately returned as `percentage` -which is an idempotent operation that simply divides `total` by `count. -To facilitate the estimation of the percentage of values that fall under -`threshold` over multiple batches of data, the function creates an -`update_op` operation whose behavior is dependent on the value of -`ignore_mask`. If `ignore_mask` is None, then `update_op` -increments `total` with the number of elements of `values` that are less -than `threshold` and `count` with the number of elements in `values`. If -`ignore_mask` is not `None`, then `update_op` increments `total` with the -number of elements of `values` that are less than `threshold` and whose -corresponding entries in `ignore_mask` are False, and `count` is incremented -with the number of elements of `ignore_mask` that are False. - -##### Args: - - -* `values`: A numeric `Tensor` of arbitrary size. -* `threshold`: A scalar threshold. -* `ignore_mask`: An optional mask of the same shape as 'values' which indicates - which elements to ignore during metric computation. 
-* `metrics_collections`: An optional list of collections that the metric - value variable should be added to. -* `updates_collections`: An optional list of collections that the metric update - ops should be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `percentage`: A tensor representing the current mean, the value of `total` - divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately. - -##### Raises: - - -* `ValueError`: If `ignore_mask` is not None and its shape doesn't match `values - or if either `metrics_collections` or `updates_collections` are supplied - but are not a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md deleted file mode 100644 index 77ddaead32..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.contrib.metrics.streaming_precision(predictions, labels, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision} - -Computes the precision of the predictions with respect to the labels. - -The `streaming_precision` function creates two local variables, -`true_positives` and `false_positives`, that are used to compute the -precision. This value is ultimately returned as `precision`, an idempotent -operation that simply divides `true_positives` by the sum of `true_positives` -and `false_positives`. To facilitate the calculation of the precision over a -stream of data, the function creates an `update_op` operation whose behavior -is dependent on the value of `ignore_mask`. 
If `ignore_mask` is None, then -`update_op` increments `true_positives` with the number of elements of -`predictions` and `labels` that are both `True` and increments -`false_positives` with the number of elements of `predictions` that are `True` -whose corresponding `labels` element is `False`. If `ignore_mask` is not -`None`, then the increments for `true_positives` and `false_positives` are -only computed using elements of `predictions` and `labels` whose corresponding -values in `ignore_mask` are `False`. In addition to performing the updates, -`update_op` also returns the value of `precision`. - -##### Args: - - -* `predictions`: The predicted values, a binary `Tensor` of arbitrary shape. -* `labels`: The ground truth values, a binary `Tensor` whose dimensions must - match `predictions`. -* `ignore_mask`: An optional, binary tensor whose size matches `predictions`. -* `metrics_collections`: An optional list of collections that `precision` should - be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `precision`: Scalar float `Tensor` with the value of `true_positives` - divided by the sum of `true_positives` and `false_positives`. -* `update_op`: `Operation` that increments `true_positives` and - `false_positives` variables appropriately and whose value matches - `precision`. - -##### Raises: - - -* `ValueError`: If the dimensions of `predictions` and `labels` don't match or - if `ignore_mask` is not `None` and its shape doesn't match `predictions` - or if either `metrics_collections` or `updates_collections` are not a list - or tuple. 
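The accumulation performed by `update_op` above can be sketched in plain Python/NumPy, ignoring `ignore_mask` and the variable collections (the counter and function names are illustrative, not TensorFlow API):

```python
import numpy as np

# Two counters standing in for the `true_positives` and
# `false_positives` local variables described above.
true_positives = 0
false_positives = 0

def update_op(predictions, labels):
    """Accumulate one batch of binary predictions/labels; return precision."""
    global true_positives, false_positives
    predictions = np.asarray(predictions, dtype=bool)
    labels = np.asarray(labels, dtype=bool)
    true_positives += int(np.sum(predictions & labels))
    false_positives += int(np.sum(predictions & ~labels))
    return true_positives / (true_positives + false_positives)

update_op([1, 1, 0, 1], [1, 0, 0, 1])  # after batch 1: tp=2, fp=1
precision = update_op([1, 0], [1, 1])  # after batch 2: tp=3, fp=1
print(precision)  # 0.75
```

Reading the `precision` value alone is idempotent; only `update_op` mutates the counters, which is the streaming behavior the docstring describes.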
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_precision_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_precision_at_k.md
new file mode 100644
index 0000000000..ad24dd742a
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_precision_at_k.md
@@ -0,0 +1,60 @@
+### `tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, k, class_id=None, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_k}
+
+Computes precision@k of the predictions with respect to sparse labels.
+
+If `class_id` is specified, we calculate precision by considering only the
+  entries in the batch for which `class_id` is in the top-k highest
+  `predictions`, and computing the fraction of them for which `class_id` is
+  indeed a correct label.
+If `class_id` is not specified, we'll calculate precision as how often, on
+  average, a class among the top-k classes with the highest predicted values
+  of a batch entry is correct and can be found in the label for that entry.
+
+`streaming_sparse_precision_at_k` creates two local variables,
+`true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute
+the precision@k frequency. This frequency is ultimately returned as
+`precision_at_<k>`: an idempotent operation that simply divides
+`true_positive_at_<k>` by total (`true_positive_at_<k>` +
+`false_positive_at_<k>`). To facilitate the estimation of precision@k over a
+stream of data, the function utilizes three steps.
+* A `top_k` operation computes a tensor whose elements indicate the top `k`
+  predictions of the `predictions` `Tensor`.
+* Set operations are applied to `top_k` and `labels` to calculate true
+  positives and false positives.
+* An `update_op` operation increments `true_positive_at_<k>` and
+  `false_positive_at_<k>`. It also returns the precision value.
+
+##### Args:
+
+
+* `predictions`: Float `Tensor` with shape [D1, ... DN, num_classes] where
+  N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].
+  The final dimension contains the logit values for each class. [D1, ... DN]
+  must match `labels`.
+* `labels`: `int64` `Tensor` or `SparseTensor` with shape
+  [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
+  target classes for the associated prediction. Commonly, N=1 and `labels`
+  has shape [batch_size, num_labels]. [D1, ... DN] must match
+  `predictions_idx`. Values should be in range [0, num_classes], where
+  num_classes is the last dimension of `predictions`.
+* `k`: Integer, k for @k metric.
+* `class_id`: Integer class ID for which we want binary metrics. This should be
+  in range [0, num_classes], where num_classes is the last dimension of
+  `predictions`.
+* `ignore_mask`: An optional, binary tensor whose shape is broadcastable to
+  the first [D1, ... DN] dimensions of `predictions_idx` and `labels`.
+* `metrics_collections`: An optional list of collections that values should
+  be added to.
+* `updates_collections`: An optional list of collections that updates should
+  be added to.
+* `name`: Name of new update operation, and namespace for other dependent ops.
+
+##### Returns:
+
+
+* `precision`: Scalar `float64` `Tensor` with the value of `true_positives`
+  divided by the sum of `true_positives` and `false_positives`.
+* `update_op`: `Operation` that increments `true_positives` and
+  `false_positives` variables appropriately, and whose value matches
+  `precision`.
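The single-batch computation behind precision@k described above can be sketched in plain NumPy (no streaming state, no `class_id` or `ignore_mask`; `precision_at_k` is a hypothetical helper for illustration, not the TensorFlow op):

```python
import numpy as np

def precision_at_k(predictions, labels, k):
    """Fraction of top-k predicted classes that appear in the labels."""
    predictions = np.asarray(predictions)
    hits, total = 0, 0
    for scores, label_set in zip(predictions, labels):
        top_k = np.argsort(-scores)[:k]  # indices of the k highest scores
        hits += sum(1 for c in top_k if c in label_set)
        total += k
    return hits / total

scores = np.array([[0.1, 0.6, 0.3],    # top-2 classes: 1, 2
                   [0.5, 0.2, 0.3]])   # top-2 classes: 0, 2
labels = [{1}, {2}]
# 2 of the 4 top-k predictions are correct labels.
print(precision_at_k(scores, labels, k=2))  # 0.5
```

The streaming variant simply keeps the true-positive and false-positive counts in persistent variables and re-divides after each `update_op`.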
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_recall_at_k.md
new file mode 100644
index 0000000000..d09b288089
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sparse_recall_at_k.md
@@ -0,0 +1,59 @@
+### `tf.contrib.metrics.streaming_sparse_recall_at_k(predictions, labels, k, class_id=None, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_recall_at_k}
+
+Computes recall@k of the predictions with respect to sparse labels.
+
+If `class_id` is specified, we calculate recall by considering only the
+  entries in the batch for which `class_id` is in the label, and computing
+  the fraction of them for which `class_id` is in the top-k `predictions`.
+If `class_id` is not specified, we'll calculate recall as how often, on
+  average, a class among the labels of a batch entry is in the top-k
+  `predictions`.
+
+`streaming_sparse_recall_at_k` creates two local variables,
+`true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute
+the recall_at_k frequency. This frequency is ultimately returned as
+`recall_at_<k>`: an idempotent operation that simply divides
+`true_positive_at_<k>` by total (`true_positive_at_<k>` +
+`false_negative_at_<k>`). To facilitate the estimation of recall@k over a
+stream of data, the function utilizes three steps.
+* A `top_k` operation computes a tensor whose elements indicate the top `k`
+  predictions of the `predictions` `Tensor`.
+* Set operations are applied to `top_k` and `labels` to calculate true
+  positives and false negatives.
+* An `update_op` operation increments `true_positive_at_<k>` and
+  `false_negative_at_<k>`. It also returns the recall value.
+
+##### Args:
+
+
+* `predictions`: Float `Tensor` with shape [D1, ... DN, num_classes] where
+  N >= 1.
Commonly, N=1 and predictions has shape [batch size, num_classes].
+  The final dimension contains the logit values for each class. [D1, ... DN]
+  must match `labels`.
+* `labels`: `int64` `Tensor` or `SparseTensor` with shape
+  [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
+  target classes for the associated prediction. Commonly, N=1 and `labels`
+  has shape [batch_size, num_labels]. [D1, ... DN] must match `predictions`.
+  Values should be in range [0, num_classes], where num_classes is the last
+  dimension of `predictions`.
+* `k`: Integer, k for @k metric.
+* `class_id`: Integer class ID for which we want binary metrics. This should be
+  in range [0, num_classes], where num_classes is the last dimension of
+  `predictions`.
+* `ignore_mask`: An optional, binary tensor whose shape is broadcastable to
+  the first [D1, ... DN] dimensions of `predictions_idx` and `labels`.
+* `metrics_collections`: An optional list of collections that values should
+  be added to.
+* `updates_collections`: An optional list of collections that updates should
+  be added to.
+* `name`: Name of new update operation, and namespace for other dependent ops.
+
+##### Returns:
+
+
+* `recall`: Scalar `float64` `Tensor` with the value of `true_positives` divided
+  by the sum of `true_positives` and `false_negatives`.
+* `update_op`: `Operation` that increments `true_positives` and
+  `false_negatives` variables appropriately, and whose value matches
+  `recall`.
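The recall@k computation described above can likewise be sketched in plain NumPy (single batch, no streaming variables, `class_id`, or `ignore_mask`; `recall_at_k` is a hypothetical helper, not the TensorFlow op):

```python
import numpy as np

def recall_at_k(predictions, labels, k):
    """Fraction of true label classes found among the top-k predictions."""
    predictions = np.asarray(predictions)
    true_pos, false_neg = 0, 0
    for scores, label_set in zip(predictions, labels):
        top_k = set(np.argsort(-scores)[:k])  # the k highest-scoring classes
        true_pos += len(label_set & top_k)    # labels found in the top-k
        false_neg += len(label_set - top_k)   # labels missed by the top-k
    return true_pos / (true_pos + false_neg)

scores = np.array([[0.1, 0.6, 0.3],    # top-2 classes: 1, 2
                   [0.5, 0.2, 0.3]])   # top-2 classes: 0, 2
labels = [{0, 1}, {0}]
# 2 of the 3 label classes appear in the top-2 predictions.
print(recall_at_k(scores, labels, k=2))
```

As with precision@k, the streaming version keeps `true_pos`/`false_neg` in persistent counters updated per batch.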
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_tensor_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_tensor_proto.md deleted file mode 100644 index f84a59be49..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_tensor_proto.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.contrib.util.make_tensor_proto(values, dtype=None, shape=None)` {#make_tensor_proto} - -Create a TensorProto. - -##### Args: - - -* `values`: Values to put in the TensorProto. -* `dtype`: Optional tensor_pb2 DataType value. -* `shape`: List of integers representing the dimensions of tensor. - -##### Returns: - - A TensorProto. Depending on the type, it may contain data in the - "tensor_content" attribute, which is not directly useful to Python programs. - To access the values you should convert the proto back to a numpy ndarray - with tensor_util.MakeNdarray(proto). - -##### Raises: - - -* `TypeError`: if unsupported types are provided. -* `ValueError`: if arguments have inappropriate values. - -make_tensor_proto accepts "values" of a python scalar, a python list, a -numpy ndarray, or a numpy scalar. - -If "values" is a python scalar or a python list, make_tensor_proto -first convert it to numpy ndarray. If dtype is None, the -conversion tries its best to infer the right numpy data -type. Otherwise, the resulting numpy array has a compatible data -type with the given dtype. - -In either case above, the numpy ndarray (either the caller provided -or the auto converted) must have the compatible type with dtype. - -make_tensor_proto then converts the numpy array to a tensor proto. - -If "shape" is None, the resulting tensor proto represents the numpy -array precisely. - -Otherwise, "shape" specifies the tensor's shape and the numpy array -can not have more elements than what "shape" specifies. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.control_dependencies.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.control_dependencies.md new file mode 100644 index 0000000000..070f8788e5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.control_dependencies.md @@ -0,0 +1,20 @@ +### `tf.control_dependencies(control_inputs)` {#control_dependencies} + +Wrapper for `Graph.control_dependencies()` using the default graph. + +See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies) +for more details. + +##### Args: + + +* `control_inputs`: A list of `Operation` or `Tensor` objects which + must be executed or computed before running the operations + defined in the context. Can also be `None` to clear the control + dependencies. + +##### Returns: + + A context manager that specifies control dependencies for all + operations constructed within the context. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.convert_to_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.convert_to_tensor.md deleted file mode 100644 index 29902ed467..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.convert_to_tensor.md +++ /dev/null @@ -1,47 +0,0 @@ -### `tf.convert_to_tensor(value, dtype=None, name=None, as_ref=False)` {#convert_to_tensor} - -Converts the given `value` to a `Tensor`. - -This function converts Python objects of various types to `Tensor` -objects. It accepts `Tensor` objects, numpy arrays, Python lists, -and Python scalars. For example: - -```python -import numpy as np - -def my_func(arg): - arg = tf.convert_to_tensor(arg, dtype=tf.float32) - return tf.matmul(arg, arg) + arg - -# The following calls are equivalent. 
-value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) -value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) -value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) -``` - -This function can be useful when composing a new operation in Python -(such as `my_func` in the example above). All standard Python op -constructors apply this function to each of their Tensor-valued -inputs, which allows those ops to accept numpy arrays, Python lists, -and scalars in addition to `Tensor` objects. - -##### Args: - - -* `value`: An object whose type has a registered `Tensor` conversion function. -* `dtype`: Optional element type for the returned tensor. If missing, the - type is inferred from the type of `value`. -* `name`: Optional name to use if a new `Tensor` is created. -* `as_ref`: True if we want the result as a ref tensor. Only used if a new - `Tensor` is created. - -##### Returns: - - A `Tensor` based on `value`. - -##### Raises: - - -* `TypeError`: If no conversion function is registered for `value`. -* `RuntimeError`: If a registered conversion function returns an invalid value. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md deleted file mode 100644 index da86e52f07..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.count_up_to(ref, limit, name=None)` {#count_up_to} - -Increments 'ref' until it reaches 'limit'. - -This operation outputs "ref" after the update is done. This makes it -easier to chain operations that need to use the updated value. - -##### Args: - - -* `ref`: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`. - Should be from a scalar `Variable` node. -* `limit`: An `int`. - If incrementing ref would bring it above limit, instead generates an - 'OutOfRange' error. 
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor`. Has the same type as `ref`.
-  A copy of the input before increment. If nothing else modifies the
-  input, the values produced will all be distinct.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.device.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.device.md
deleted file mode 100644
index 2a5e33203d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.device.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.device(device_name_or_function)` {#device}
-
-Wrapper for `Graph.device()` using the default graph.
-
-See
-[`Graph.device()`](../../api_docs/python/framework.md#Graph.device)
-for more details.
-
-##### Args:
-
-
-* `device_name_or_function`: The device name or function to use in
-  the context.
-
-##### Returns:
-
-  A context manager that specifies the default device to use for newly
-  created ops.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag.md
new file mode 100644
index 0000000000..94eb6a6717
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag.md
@@ -0,0 +1,33 @@
+### `tf.diag(diagonal, name=None)` {#diag}
+
+Returns a diagonal tensor with given diagonal values.
+
+Given a `diagonal`, this operation returns a tensor with the `diagonal` and
+everything else padded with zeros. The diagonal is computed as follows:
+
+Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of
+rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
+
+`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
+
+For example:
+
+```prettyprint
+# 'diagonal' is [1, 2, 3, 4]
+tf.diag(diagonal) ==> [[1, 0, 0, 0]
+                       [0, 2, 0, 0]
+                       [0, 0, 3, 0]
+                       [0, 0, 0, 4]]
+```
+
+##### Args:
+
+
+* `diagonal`: A `Tensor`.
Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`. + Rank k tensor where k is at most 3. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `diagonal`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag_part.md deleted file mode 100644 index 249eb80e50..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.diag_part.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.diag_part(input, name=None)` {#diag_part} - -Returns the diagonal part of the tensor. - -This operation returns a tensor with the `diagonal` part -of the `input`. The `diagonal` part is computed as follows: - -Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a -tensor of rank `k` with dimensions `[D1,..., Dk]` where: - -`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`. - -For example: - -```prettyprint -# 'input' is [[1, 0, 0, 0] - [0, 2, 0, 0] - [0, 0, 3, 0] - [0, 0, 0, 4]] - -tf.diag_part(input) ==> [1, 2, 3, 4] -``` - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`. - Rank k tensor where k is 2, 4, or 6. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. The extracted diagonal. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.AlreadyExistsError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.AlreadyExistsError.md deleted file mode 100644 index 85425df298..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.AlreadyExistsError.md +++ /dev/null @@ -1,14 +0,0 @@ -Raised when an entity that we attempted to create already exists. - -For example, running an operation that saves a file -(e.g. 
[`tf.train.Saver.save()`](../../api_docs/python/train.md#Saver.save)) -could potentially raise this exception if an explicit filename for an -existing file was passed. - -- - - - -#### `tf.errors.AlreadyExistsError.__init__(node_def, op, message)` {#AlreadyExistsError.__init__} - -Creates an `AlreadyExistsError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.InternalError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.InternalError.md deleted file mode 100644 index dd229d2a3d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.InternalError.md +++ /dev/null @@ -1,12 +0,0 @@ -Raised when the system experiences an internal error. - -This exception is raised when some invariant expected by the runtime -has been broken. Catching this exception is not recommended. - -- - - - -#### `tf.errors.InternalError.__init__(node_def, op, message)` {#InternalError.__init__} - -Creates an `InternalError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.OutOfRangeError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.OutOfRangeError.md new file mode 100644 index 0000000000..ef996b0a88 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.OutOfRangeError.md @@ -0,0 +1,15 @@ +Raised when an operation iterates past the valid input range. + +This exception is raised in "end-of-file" conditions, such as when a +[`queue.dequeue()`](../../api_docs/python/io_ops.md#QueueBase.dequeue) +operation is blocked on an empty queue, and a +[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) +operation executes. + +- - - + +#### `tf.errors.OutOfRangeError.__init__(node_def, op, message)` {#OutOfRangeError.__init__} + +Creates an `OutOfRangeError`. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.UnavailableError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.UnavailableError.md deleted file mode 100644 index e212ae94ec..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.UnavailableError.md +++ /dev/null @@ -1,11 +0,0 @@ -Raised when the runtime is currently unavailable. - -This exception is not currently used. - -- - - - -#### `tf.errors.UnavailableError.__init__(node_def, op, message)` {#UnavailableError.__init__} - -Creates an `UnavailableError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md deleted file mode 100644 index 7214e3ae20..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.fft3d(input, name=None)` {#fft3d} - -Compute the 3-dimensional discrete Fourier Transform. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 3-D tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. The 3D Fourier Transform of `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floor.md new file mode 100644 index 0000000000..4aadcff6ef --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floor.md @@ -0,0 +1,14 @@ +### `tf.floor(x, name=None)` {#floor} + +Returns element-wise largest integer not greater than x. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floordiv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floordiv.md new file mode 100644 index 0000000000..8f824e867e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.floordiv.md @@ -0,0 +1,32 @@ +### `tf.floordiv(x, y, name=None)` {#floordiv} + +Divides `x / y` elementwise, rounding down for floating point. + +The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for +floating point arguments so that the result is always an integer (though +possibly an integer represented as floating point). This op is generated by +`x // y` floor division in Python 3 and in Python 2.7 with +`from __future__ import division`. + +Note that for efficiency, `floordiv` uses C semantics for negative numbers +(unlike Python and Numpy). + +`x` and `y` must have the same type, and the result will have the same type +as well. + +##### Args: + + +* `x`: `Tensor` numerator of real numeric type. +* `y`: `Tensor` denominator of real numeric type. +* `name`: A name for the operation (optional). + +##### Returns: + + `x / y` rounded down (except possibly towards zero for negative integers). + +##### Raises: + + +* `TypeError`: If the inputs are complex. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.foldl.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.foldl.md deleted file mode 100644 index dac4268165..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.foldl.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldl} - -foldl on the list of tensors unpacked from `elems` on dimension 0. - -This foldl operator repeatedly applies the callable `fn` to a sequence -of elements from first to last. 
The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of `fn`. If `initializer` is None, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* `fn`: The callable to be performed.
-* `elems`: A tensor to be unpacked on dimension 0.
-* `initializer`: (optional) The initial value for the accumulator.
-* `parallel_iterations`: (optional) The number of iterations allowed to run
-    in parallel.
-* `back_prop`: (optional) True enables back propagation.
-* `swap_memory`: (optional) True enables GPU-CPU memory swapping.
-* `name`: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
-  A tensor resulting from applying `fn` consecutively to the list of tensors
-  unpacked from `elems`, from first to last.
-
-##### Raises:
-
-
-* `TypeError`: if `fn` is not callable.
-
-##### Example:
-
-  ```python
-  elems = [1, 2, 3, 4, 5, 6]
-  sum = foldl(lambda a, x: a + x, elems)
-  # sum == 21
-  ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.gather.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.gather.md
new file mode 100644
index 0000000000..f3ae59bbb6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.gather.md
@@ -0,0 +1,35 @@
+### `tf.gather(params, indices, validate_indices=None, name=None)` {#gather}
+
+Gather slices from `params` according to `indices`.
+
+`indices` must be an integer tensor of any dimension (usually 0-D or 1-D).
+Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
+
+    # Scalar indices
+    output[:, ..., :] = params[indices, :, ..., :]
+
+    # Vector indices
+    output[i, :, ..., :] = params[indices[i], :, ..., :]
+
+    # Higher rank indices
+    output[i, ..., j, :, ..., :] = params[indices[i, ..., j], :, ..., :]
+
+If `indices` is a permutation and `len(indices) == params.shape[0]` then
+this operation will permute `params` accordingly.
+
+ +
+ +##### Args: + + +* `params`: A `Tensor`. +* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. +* `validate_indices`: An optional `bool`. Defaults to `True`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `params`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_default_session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_default_session.md deleted file mode 100644 index c564366e8b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_default_session.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.get_default_session()` {#get_default_session} - -Returns the default session for the current thread. - -The returned `Session` will be the innermost session on which a -`Session` or `Session.as_default()` context has been entered. - -NOTE: The default session is a property of the current thread. If you -create a new thread, and wish to use the default session in that -thread, you must explicitly add a `with sess.as_default():` in that -thread's function. - -##### Returns: - - The default `Session` being used in the current thread. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_handle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_handle.md deleted file mode 100644 index 3fdd2b0ae9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_handle.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.get_session_handle(data, name=None)` {#get_session_handle} - -Return the handle of `data`. - -This is EXPERIMENTAL and subject to change. - -Keep `data` "in-place" in the runtime and create a handle that can be -used to retrieve `data` in a subsequent run(). - -Combined with `get_session_tensor`, we can keep a tensor produced in -one run call in place, and use it as the input in a future run call. 
-Below is a simple example: - -```python -c = tf.mul(a, b) -h = tf.get_session_handle(c) -h = sess.run(h) - -p, a = tf.get_session_tensor(tf.float32) -b = tf.mul(a, 10) -c = sess.run(b, feed_dict={p: h.handle}) -``` - -##### Args: - - -* `data`: A tensor to be stored in the session. -* `name`: Optional name prefix for the return tensor. - -##### Returns: - - A scalar string tensor representing a unique handle for `data`. - -##### Raises: - - -* `TypeError`: if `data` is not a Tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_variable_scope.md deleted file mode 100644 index 4a0d3bc775..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_variable_scope.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.get_variable_scope()` {#get_variable_scope} - -Returns the current variable scope. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.greater_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.greater_equal.md deleted file mode 100644 index 9d68429c36..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.greater_equal.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.greater_equal(x, y, name=None)` {#greater_equal} - -Returns the truth value of (x >= y) element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. 
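`tf.greater_equal` is a plain elementwise `>=`; a minimal Python sketch of the semantics for equal-shape inputs (broadcasting and dtype checks omitted; `elementwise_ge` is an illustrative name, not a TF API):

```python
def elementwise_ge(xs, ys):
    # x >= y per element, mirroring tf.greater_equal on same-length inputs.
    return [x >= y for x, y in zip(xs, ys)]

print(elementwise_ge([1, 4, 5], [2, 4, 3]))  # [False, True, True]
```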
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.histogram_fixed_width.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.histogram_fixed_width.md
new file mode 100644
index 0000000000..4a2997103b
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.histogram_fixed_width.md
@@ -0,0 +1,38 @@
+### `tf.histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)` {#histogram_fixed_width}
+
+Return histogram of values.
+
+Given the tensor `values`, this operation returns a rank 1 histogram counting
+the number of entries in `values` that fell into every bin. The bins are
+equal width and determined by the arguments `value_range` and `nbins`.
+
+##### Args:
+
+
+* `values`: Numeric `Tensor`.
+* `value_range`: Shape [2] `Tensor`. values <= value_range[0] will be
+  mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
+  Must be the same dtype as `values`.
+* `nbins`: Scalar `int32 Tensor`. Number of histogram bins.
+* `dtype`: dtype for returned histogram.
+* `name`: A name for this operation (defaults to 'histogram_fixed_width').
+
+##### Returns:
+
+  A 1-D `Tensor` holding histogram of values.
+
+
+##### Examples:
+
+```python
+# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
+nbins = 5
+value_range = [0.0, 5.0]
+new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
+
+with tf.Session() as sess:
+  hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
+  print(sess.run(hist))  # => [2, 1, 1, 0, 2]
+```
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md
new file mode 100644
index 0000000000..26582404f6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md
@@ -0,0 +1,15 @@
+### `tf.ifft(input, name=None)` {#ifft}
+
+Compute the inverse 1-dimensional discrete Fourier Transform.
+
+##### Args:
+
+
+* `input`: A `Tensor` of type `complex64`. A complex64 vector.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `complex64`.
+  The inverse 1D Fourier Transform of `input`.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.imag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.imag.md
deleted file mode 100644
index 1dfcadbb95..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.imag.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.imag(input, name=None)` {#imag}
-
-Returns the imaginary part of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-type `float` or `double` that is the imaginary part of each element in
-`input`. All elements in `input` must be complex numbers of the form \(a +
-bj\), where *a* is the real part and *b* is the imaginary part returned by
-this operation.
-
-For example:
-
-```
-# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
-tf.imag(input) ==> [4.75, 5.75]
-```
-
-##### Args:
-
-
-* `input`: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `float` or `double`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.central_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.central_crop.md
new file mode 100644
index 0000000000..4e6b6115f8
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.central_crop.md
@@ -0,0 +1,30 @@
+### `tf.image.central_crop(image, central_fraction)` {#central_crop}
+
+Crop the central region of the image.
+
+Remove the outer parts of an image but retain the central region of the image
+along each dimension. If we specify central_fraction = 0.5, this function
+returns the region marked with "X" in the below diagram.
+
+     --------
+    |        |
+    |  XXXX  |
+    |  XXXX  |
+    |        |   where "X" is the central 50% of the image.
+     --------
+
+##### Args:
+
+
+* `image`: 3-D float Tensor of shape [height, width, depth]
+* `central_fraction`: float (0, 1], fraction of size to crop
+
+##### Raises:
+
+
+* `ValueError`: if central_fraction is not within (0, 1].
+
+##### Returns:
+
+  3-D float Tensor
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.crop_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.crop_to_bounding_box.md
deleted file mode 100644
index 4724ff5eb9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.crop_to_bounding_box.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#crop_to_bounding_box}
-
-Crops an image to a specified bounding box.
-
-This op cuts a rectangular part out of `image`. The top-left corner of the
-returned image is at `offset_height, offset_width` in `image`, and its
-lower-right corner is at
-`offset_height + target_height, offset_width + target_width`.
- -##### Args: - - -* `image`: 3-D tensor with shape `[height, width, channels]` -* `offset_height`: Vertical coordinate of the top-left corner of the result in - the input. -* `offset_width`: Horizontal coordinate of the top-left corner of the result in - the input. -* `target_height`: Height of the result. -* `target_width`: Width of the result. - -##### Returns: - - 3-D tensor of image with shape `[target_height, target_width, channels]` - -##### Raises: - - -* `ValueError`: If the shape of `image` is incompatible with the `offset_*` or - `target_*` arguments - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.decode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.decode_png.md new file mode 100644 index 0000000000..4332af7704 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.decode_png.md @@ -0,0 +1,30 @@ +### `tf.image.decode_png(contents, channels=None, dtype=None, name=None)` {#decode_png} + +Decode a PNG-encoded image to a uint8 or uint16 tensor. + +The attr `channels` indicates the desired number of color channels for the +decoded image. + +Accepted values are: + +* 0: Use the number of channels in the PNG-encoded image. +* 1: output a grayscale image. +* 3: output an RGB image. +* 4: output an RGBA image. + +If needed, the PNG-encoded image is transformed to match the requested number +of color channels. + +##### Args: + + +* `contents`: A `Tensor` of type `string`. 0-D. The PNG-encoded image. +* `channels`: An optional `int`. Defaults to `0`. + Number of color channels for the decoded image. +* `dtype`: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `dtype`. 3-D with shape `[height, width, channels]`. 
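To make the `channels` behavior of `decode_png` concrete, here is a small hypothetical helper (not part of TensorFlow; `decoded_shape` and its parameters are made-up names) that computes the output shape the op would produce:

```python
def decoded_shape(png_height, png_width, png_channels, channels=0):
    # channels=0: keep the channel count stored in the PNG file itself;
    # 1, 3, or 4: force grayscale, RGB, or RGBA output respectively.
    if channels not in (0, 1, 3, 4):
        raise ValueError("channels must be 0, 1, 3 or 4")
    return [png_height, png_width, png_channels if channels == 0 else channels]

# A 32x32 grayscale PNG decoded with channels=3 yields an RGB-shaped tensor.
print(decoded_shape(32, 32, 1, channels=3))  # [32, 32, 3]
```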
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.draw_bounding_boxes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.draw_bounding_boxes.md
new file mode 100644
index 0000000000..0e1c6115c7
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.draw_bounding_boxes.md
@@ -0,0 +1,32 @@
+### `tf.image.draw_bounding_boxes(images, boxes, name=None)` {#draw_bounding_boxes}
+
+Draw bounding boxes on a batch of images.
+
+Outputs a copy of `images` but draws on top of the pixels zero or more
+bounding boxes specified by the locations in `boxes`. The coordinates of
+each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`.
+The bounding box coordinates are floats in `[0.0, 1.0]` relative to the
+height and width of the underlying image.
+
+For example, if an image is 100 x 200 pixels and the bounding box is
+`[0.1, 0.2, 0.5, 0.9]`, the upper-left and lower-right corners of the
+bounding box will be at `(10, 40)` and `(50, 180)` in `(y, x)` pixel
+coordinates.
+
+Parts of the bounding box may fall outside the image.
+
+##### Args:
+
+
+* `images`: A `Tensor`. Must be one of the following types: `float32`, `half`.
+    4-D with shape `[batch, height, width, depth]`. A batch of images.
+* `boxes`: A `Tensor` of type `float32`.
+    3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding
+    boxes.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `images`.
+  4-D with the same shape as `images`. The batch of input images with
+  bounding boxes drawn on the images.
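The normalized-to-pixel conversion used by `draw_bounding_boxes` can be sketched with a tiny helper (illustrative only; `box_to_pixels` is a made-up name, not a TensorFlow API):

```python
def box_to_pixels(box, height, width):
    # Convert a normalized [y_min, x_min, y_max, x_max] box (the encoding
    # used by tf.image.draw_bounding_boxes) into (y, x) pixel corners.
    y_min, x_min, y_max, x_max = box
    return (y_min * height, x_min * width), (y_max * height, x_max * width)

top_left, bottom_right = box_to_pixels([0.1, 0.2, 0.5, 0.9], height=100, width=200)
# top_left is (10, 40) and bottom_right is (50, 180), up to float rounding
```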
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.per_image_whitening.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.per_image_whitening.md new file mode 100644 index 0000000000..8f72af6a31 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.per_image_whitening.md @@ -0,0 +1,29 @@ +### `tf.image.per_image_whitening(image)` {#per_image_whitening} + +Linearly scales `image` to have zero mean and unit norm. + +This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average +of all values in image, and +`adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`. + +`stddev` is the standard deviation of all values in `image`. It is capped +away from zero to protect against division by 0 when handling uniform images. + +Note that this implementation is limited: +* It only whitens based on the statistics of an individual image. +* It does not take into account the covariance structure. + +##### Args: + + +* `image`: 3-D tensor of shape `[height, width, channels]`. + +##### Returns: + + The whitened image with same shape as `image`. + +##### Raises: + + +* `ValueError`: if the shape of 'image' is incompatible with this function. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md new file mode 100644 index 0000000000..d063895136 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md @@ -0,0 +1,24 @@ +### `tf.image.random_flip_left_right(image, seed=None)` {#random_flip_left_right} + +Randomly flip an image horizontally (left to right). + +With a 1 in 2 chance, outputs the contents of `image` flipped along the +second dimension, which is `width`. Otherwise output the image as-is. 
+ +##### Args: + + +* `image`: A 3-D tensor of shape `[height, width, channels].` +* `seed`: A Python integer. Used to create a random seed. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. + +##### Returns: + + A 3-D tensor of the same type and shape as `image`. + +##### Raises: + + +* `ValueError`: if the shape of `image` not supported. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_image_with_crop_or_pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_image_with_crop_or_pad.md deleted file mode 100644 index c93111bd99..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_image_with_crop_or_pad.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.image.resize_image_with_crop_or_pad(image, target_height, target_width)` {#resize_image_with_crop_or_pad} - -Crops and/or pads an image to a target width and height. - -Resizes an image to a target width and height by either centrally -cropping the image or padding it evenly with zeros. - -If `width` or `height` is greater than the specified `target_width` or -`target_height` respectively, this op centrally crops along that dimension. -If `width` or `height` is smaller than the specified `target_width` or -`target_height` respectively, this op centrally pads with 0 along that -dimension. - -##### Args: - - -* `image`: 3-D tensor of shape [height, width, channels] -* `target_height`: Target height. -* `target_width`: Target width. - -##### Raises: - - -* `ValueError`: if `target_height` or `target_width` are zero or negative. 
- -##### Returns: - - Cropped and/or padded image of shape - `[target_height, target_width, channels]` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.transpose_image.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.transpose_image.md new file mode 100644 index 0000000000..1cc527d345 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.transpose_image.md @@ -0,0 +1,20 @@ +### `tf.image.transpose_image(image)` {#transpose_image} + +Transpose an image by swapping the first and second dimension. + +See also `transpose()`. + +##### Args: + + +* `image`: 3-D tensor of shape `[height, width, channels]` + +##### Returns: + + A 3-D tensor of shape `[width, height, channels]` + +##### Raises: + + +* `ValueError`: if the shape of `image` not supported. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.initialize_all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.initialize_all_variables.md deleted file mode 100644 index 9a0e5d8261..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.initialize_all_variables.md +++ /dev/null @@ -1,10 +0,0 @@ -### `tf.initialize_all_variables()` {#initialize_all_variables} - -Returns an Op that initializes all variables. - -This is just a shortcut for `initialize_variables(all_variables())` - -##### Returns: - - An Op that initializes all variables in the graph. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.invert_permutation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.invert_permutation.md new file mode 100644 index 0000000000..b12cc7e94c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.invert_permutation.md @@ -0,0 +1,30 @@ +### `tf.invert_permutation(x, name=None)` {#invert_permutation} + +Computes the inverse permutation of a tensor. 
+ +This operation computes the inverse of an index permutation. It takes a 1-D +integer tensor `x`, which represents the indices of a zero-based array, and +swaps each value with its index position. In other words, for an output tensor +`y` and an input tensor `x`, this operation computes the following: + +`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]` + +The values must include 0. There can be no duplicate values or negative values. + +For example: + +```prettyprint +# tensor `x` is [3, 4, 0, 2, 1] +invert_permutation(x) ==> [2, 4, 3, 0, 1] +``` + +##### Args: + + +* `x`: A `Tensor` of type `int32`. 1-D. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int32`. 1-D. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_nan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_nan.md deleted file mode 100644 index 1bf3a6825c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_nan.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.is_nan(x, name=None)` {#is_nan} - -Returns which elements of x are NaN. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. 
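`tf.is_nan` reduces to the standard IEEE-754 NaN test applied elementwise; a plain-Python sketch of the semantics (not the TF kernel; `is_nan_elementwise` is an illustrative name):

```python
import math

def is_nan_elementwise(xs):
    # NaN is the only float that compares unequal to itself; math.isnan
    # implements the same IEEE-754 test tf.is_nan applies per element.
    return [math.isnan(x) for x in xs]

print(is_nan_elementwise([1.0, float("nan"), float("inf")]))  # [False, True, False]
```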
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_numeric_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_numeric_tensor.md new file mode 100644 index 0000000000..c2e61b856d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.is_numeric_tensor.md @@ -0,0 +1,4 @@ +### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor} + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.load_op_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.load_op_library.md new file mode 100644 index 0000000000..0f38dfe4d5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.load_op_library.md @@ -0,0 +1,24 @@ +### `tf.load_op_library(library_filename)` {#load_op_library} + +Loads a TensorFlow plugin, containing custom ops and kernels. + +Pass "library_filename" to a platform-specific mechanism for dynamically +loading a library. The rules for determining the exact location of the +library are platform-specific and are not documented here. + +##### Args: + + +* `library_filename`: Path to the plugin. + Relative or absolute filesystem path to a dynamic library file. + +##### Returns: + + A python module containing the Python wrappers for Ops defined in + the plugin. + +##### Raises: + + +* `RuntimeError`: when unable to load the library or get the python wrappers. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_and.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_and.md deleted file mode 100644 index dd5b563c8b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_and.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.logical_and(x, y, name=None)` {#logical_and} - -Returns the truth value of x AND y element-wise. - -##### Args: - - -* `x`: A `Tensor` of type `bool`. -* `y`: A `Tensor` of type `bool`. 
-* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `bool`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.map_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.map_fn.md new file mode 100644 index 0000000000..1892d7b03c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.map_fn.md @@ -0,0 +1,42 @@ +### `tf.map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#map_fn} + +map on the list of tensors unpacked from `elems` on dimension 0. + +This map operator repeatedly applies the callable `fn` to a sequence of +elements from first to last. The elements are made of the tensors unpacked +from `elems`. `dtype` is the data type of the return value of `fn`. Users +must provide `dtype` if it is different from the data type of `elems`. + +Suppose that `elems` is unpacked into `values`, a list of tensors. The shape +of the result tensor is `[len(values)] + fn(values[0]).shape`. + +##### Args: + + +* `fn`: The callable to be performed. +* `elems`: A tensor to be unpacked to apply `fn`. +* `dtype`: (optional) The output type of `fn`. +* `parallel_iterations`: (optional) The number of iterations allowed to run + in parallel. +* `back_prop`: (optional) True enables back propagation. +* `swap_memory`: (optional) True enables GPU-CPU memory swapping. +* `name`: (optional) Name prefix for the returned tensors. + +##### Returns: + + A tensor that packs the results of applying `fn` to the list of tensors + unpacked from `elems`, from first to last. + +##### Raises: + + +* `TypeError`: if `fn` is not callable. 
+ +##### Example: + + ```python + elems = [1, 2, 3, 4, 5, 6] + squares = map_fn(lambda x: x * x, elems) + # squares == [1, 4, 9, 16, 25, 36] + ``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matmul.md deleted file mode 100644 index 6602562ecc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matmul.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)` {#matmul} - -Multiplies matrix `a` by matrix `b`, producing `a` * `b`. - -The inputs must be two-dimensional matrices, with matching inner dimensions, -possibly after transposition. - -Both matrices must be of the same type. The supported types are: -`float`, `double`, `int32`, `complex64`. - -Either matrix can be transposed on the fly by setting the corresponding flag -to `True`. This is `False` by default. - -If one or both of the matrices contain a lot of zeros, a more efficient -multiplication algorithm can be used by setting the corresponding -`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. - -For example: - -```python -# 2-D tensor `a` -a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.] - [4. 5. 6.]] -# 2-D tensor `b` -b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.] - [9. 10.] - [11. 12.]] -c = tf.matmul(a, b) => [[58 64] - [139 154]] -``` - -##### Args: - - -* `a`: `Tensor` of type `float`, `double`, `int32` or `complex64`. -* `b`: `Tensor` with same type as `a`. -* `transpose_a`: If `True`, `a` is transposed before multiplication. -* `transpose_b`: If `True`, `b` is transposed before multiplication. -* `a_is_sparse`: If `True`, `a` is treated as a sparse matrix. -* `b_is_sparse`: If `True`, `b` is treated as a sparse matrix. -* `name`: Name for the operation (optional). 
- -##### Returns: - - A `Tensor` of the same type as `a`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_determinant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_determinant.md new file mode 100644 index 0000000000..a5cd5a7fe6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_determinant.md @@ -0,0 +1,16 @@ +### `tf.matrix_determinant(input, name=None)` {#matrix_determinant} + +Calculates the determinant of a square matrix. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. + A tensor of shape `[M, M]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + A scalar, equal to the determinant of the input. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md deleted file mode 100644 index 8f5548d2cb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md +++ /dev/null @@ -1,47 +0,0 @@ -### `tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#matrix_solve_ls} - -Solves a linear least-squares problem. - -Below we will use the following notation -`matrix`=\\(A \in \Re^{m \times n}\\), -`rhs`=\\(B \in \Re^{m \times k}\\), -`output`=\\(X \in \Re^{n \times k}\\), -`l2_regularizer`=\\(\lambda\\). - -If `fast` is `True`, then the solution is computed by solving the normal -equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then -\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the regularized -least-squares problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} -||A Z - B||_F^2 + \lambda ||Z||_F^2\\). 
If \\(m \lt n\\) then `output` is -computed as \\(X = A^T (A A^T + \lambda I)^{-1} B\\), -which (for \\(\lambda = 0\\)) is the minimum-norm solution to the -under-determined linear system, i.e. -\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), -subject to \\(A Z = B\\). -Notice that the fast path is only numerically stable when \\(A\\) is -numerically full rank and has a condition number -\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) -or \\(\lambda\\) is sufficiently large. - -If `fast` is `False` then the solution is computed using the rank revealing -QR decomposition with column pivoting. This will always compute a -least-squares solution that minimizes the residual norm -\\(||A X - B||_F^2 \\), even when \\(A\\) is rank deficient or -ill-conditioned. Notice: The current version does not compute a minimum norm -solution. If `fast` is `False` then `l2_regularizer` is ignored. - -##### Args: - - -* `matrix`: 2-D `Tensor` of shape `[M, N]`. -* `rhs`: 2-D `Tensor` of shape is `[M, K]`. -* `l2_regularizer`: 0-D `double` `Tensor`. Ignored if `fast=False`. -* `fast`: bool. Defaults to `True`. -* `name`: string, optional name of the operation. - -##### Returns: - - -* `output`: Matrix of shape `[N, K]` containing the matrix that solves - `matrix * output = rhs` in the least-squares sense. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.minimum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.minimum.md new file mode 100644 index 0000000000..bff13483f4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.minimum.md @@ -0,0 +1,15 @@ +### `tf.minimum(x, y, name=None)` {#minimum} + +Returns the min of x and y (i.e. x < y ? x : y) element-wise, broadcasts. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`. +* `y`: A `Tensor`. Must have the same type as `x`. 
+* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md deleted file mode 100644 index 33da8534c2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.nn.avg_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#avg_pool} - -Performs the average pooling on the input. - -Each entry in `output` is the mean of the corresponding size `ksize` -window in `value`. - -##### Args: - - -* `value`: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type - `float32`, `float64`, `qint8`, `quint8`, or `qint32`. -* `ksize`: A list of ints that has length >= 4. - The size of the window for each dimension of the input tensor. -* `strides`: A list of ints that has length >= 4. - The stride of the sliding window for each dimension of the - input tensor. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `data_format`: A string. 'NHWC' and 'NCHW' are supported. -* `name`: Optional name for the operation. - -##### Returns: - - A `Tensor` with the same type as `value`. The average pooled output tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.batch_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.batch_normalization.md deleted file mode 100644 index eda1d7d053..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.batch_normalization.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)` {#batch_normalization} - -Batch normalization. - -As described in http://arxiv.org/abs/1502.03167. 
-Normalizes a tensor by `mean` and `variance`, and applies (optionally) a -`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\): - -\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\) - -`mean`, `variance`, `offset` and `scale` are all expected to be of one of two -shapes: - * In all generality, they can have the same number of dimensions as the - input `x`, with identical sizes as `x` for the dimensions that are not - normalized over (the 'depth' dimension(s)), and dimension 1 for the - others which are being normalized over. - `mean` and `variance` in this case would typically be the outputs of - `tf.nn.moments(..., keep_dims=True)` during training, or running averages - thereof during inference. - * In the common case where the 'depth' dimension is the last dimension in - the input tensor `x`, they may be one dimensional tensors of the same - size as the 'depth' dimension. - This is the case for example for the common `[batch, depth]` layout of - fully-connected layers, and `[batch, height, width, depth]` for - convolutions. - `mean` and `variance` in this case would typically be the outputs of - `tf.nn.moments(..., keep_dims=False)` during training, or running averages - thereof during inference. - -##### Args: - - -* `x`: Input `Tensor` of arbitrary dimensionality. -* `mean`: A mean `Tensor`. -* `variance`: A variance `Tensor`. -* `offset`: An offset `Tensor`, often denoted \\(\beta\\) in equations, or - None. If present, will be added to the normalized tensor. -* `scale`: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or - `None`. If present, the scale is applied to the normalized tensor. -* `variance_epsilon`: A small float number to avoid dividing by 0. -* `name`: A name for this operation (optional). - -##### Returns: - - the normalized, scaled, offset tensor. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.embedding_lookup.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.embedding_lookup.md new file mode 100644 index 0000000000..588c2b393d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.embedding_lookup.md @@ -0,0 +1,50 @@ +### `tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True)` {#embedding_lookup} + +Looks up `ids` in a list of embedding tensors. + +This function is used to perform parallel lookups on the list of +tensors in `params`. It is a generalization of +[`tf.gather()`](../../api_docs/python/array_ops.md#gather), where `params` is +interpreted as a partition of a larger embedding tensor. + +If `len(params) > 1`, each element `id` of `ids` is partitioned between +the elements of `params` according to the `partition_strategy`. +In all strategies, if the id space does not evenly divide the number of +partitions, each of the first `(max_id + 1) % len(params)` partitions will +be assigned one more id. + +If `partition_strategy` is `"mod"`, we assign each id to partition +`p = id % len(params)`. For instance, +13 ids are split across 5 partitions as: +`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]` + +If `partition_strategy` is `"div"`, we assign ids to partitions in a +contiguous manner. In this case, 13 ids are split across 5 partitions as: +`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]` + +The results of the lookup are concatenated into a dense +tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`. + +##### Args: + + +* `params`: A list of tensors with the same type and which can be concatenated + along dimension 0. Each `Tensor` must be appropriately sized for the given + `partition_strategy`. +* `ids`: A `Tensor` with type `int32` or `int64` containing the ids to be looked + up in `params`. 
+* `partition_strategy`: A string specifying the partitioning strategy, relevant + if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default + is `"mod"`. +* `name`: A name for the operation (optional). +* `validate_indices`: Whether or not to validate gather indices. + +##### Returns: + + A `Tensor` with the same type as the tensors in `params`. + +##### Raises: + + +* `ValueError`: If `params` is empty. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.learned_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.learned_unigram_candidate_sampler.md deleted file mode 100644 index 4f69938e59..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.learned_unigram_candidate_sampler.md +++ /dev/null @@ -1,53 +0,0 @@ -### `tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#learned_unigram_candidate_sampler} - -Samples a set of classes from a distribution learned during training. - -This operation randomly samples a tensor of sampled classes -(`sampled_candidates`) from the range of integers `[0, range_max)`. - -The elements of `sampled_candidates` are drawn without replacement -(if `unique=True`) or with replacement (if `unique=False`) from -the base distribution. - -The base distribution for this operation is constructed on the fly -during training. It is a unigram distribution over the target -classes seen so far during training. Every integer in `[0, range_max)` -begins with a weight of 1, and is incremented by 1 each time it is -seen as a target class. The base distribution is not saved to checkpoints, -so it is reset when the model is reloaded. 
- -In addition, this operation returns tensors `true_expected_count` -and `sampled_expected_count` representing the number of times each -of the target classes (`true_classes`) and the sampled -classes (`sampled_candidates`) is expected to occur in an average -tensor of sampled classes. These values correspond to `Q(y|x)` -defined in [this -document](http://www.tensorflow.org/extras/candidate_sampling.pdf). -If `unique=True`, then these are post-rejection probabilities and we -compute them approximately. - -##### Args: - - -* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. The target classes. -* `num_true`: An `int`. The number of target classes per training example. -* `num_sampled`: An `int`. The number of classes to randomly sample per batch. -* `unique`: A `bool`. Determines whether all sampled classes in a batch are - unique. -* `range_max`: An `int`. The number of possible classes. -* `seed`: An `int`. An operation-specific seed. Default is 0. -* `name`: A name for the operation (optional). - -##### Returns: - - -* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. - The sampled classes. -* `true_expected_count`: A tensor of type `float`. Same shape as - `true_classes`. The expected counts under the sampling distribution - of each of `true_classes`. -* `sampled_expected_count`: A tensor of type `float`. Same shape as - `sampled_candidates`. The expected counts under the sampling distribution - of each of `sampled_candidates`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.log_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.log_softmax.md new file mode 100644 index 0000000000..18e1f96590 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.log_softmax.md @@ -0,0 +1,19 @@ +### `tf.nn.log_softmax(logits, name=None)` {#log_softmax} + +Computes log softmax activations. 
+ +For each batch `i` and class `j` we have + + logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i]))) + +##### Args: + + +* `logits`: A `Tensor`. Must be one of the following types: `float32`, `float64`. + 2-D with shape `[batch_size, num_classes]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `logits`. Same shape as `logits`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.max_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.max_pool3d.md new file mode 100644 index 0000000000..471fcb532f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.max_pool3d.md @@ -0,0 +1,23 @@ +### `tf.nn.max_pool3d(input, ksize, strides, padding, name=None)` {#max_pool3d} + +Performs 3D max pooling on the input. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Shape `[batch, depth, rows, cols, channels]` tensor to pool over. +* `ksize`: A list of `ints` that has length `>= 5`. + 1-D tensor of length 5. The size of the window for each dimension of + the input tensor. Must have `ksize[0] = ksize[1] = 1`. +* `strides`: A list of `ints` that has length `>= 5`. + 1-D tensor of length 5. The stride of the sliding window for each + dimension of `input`. Must have `strides[0] = strides[4] = 1`. +* `padding`: A `string` from: `"SAME", "VALID"`. + The type of padding algorithm to use. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. The max pooled output tensor. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.moments.md new file mode 100644 index 0000000000..704bb5ba49 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.moments.md @@ -0,0 +1,30 @@ +### `tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)` {#moments} + +Calculate the mean and variance of `x`. + +The mean and variance are calculated by aggregating the contents of `x` +across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean +and variance of a vector. + +When using these moments for batch normalization (see +`tf.nn.batch_normalization`): + * for so-called "global normalization", used with convolutional filters with + shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`. + * for simple batch normalization pass `axes=[0]` (batch only). + +##### Args: + + +* `x`: A `Tensor`. +* `axes`: array of ints. Axes along which to compute mean and + variance. +* `shift`: A `Tensor` containing the value by which to shift the data for + numerical stability, or `None` if no shift is to be performed. A shift + close to the true mean provides the most numerically stable results. +* `keep_dims`: produce moments with the same dimensionality as the input. +* `name`: Name used to scope the operations that compute the moments. + +##### Returns: + + Two `Tensor` objects: `mean` and `variance`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.relu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.relu.md new file mode 100644 index 0000000000..5811a1da96 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.relu.md @@ -0,0 +1,14 @@ +### `tf.nn.relu(features, name=None)` {#relu} + +Computes rectified linear: `max(features, 0)`. + +##### Args: + + +* `features`: A `Tensor`. 
Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `features`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax.md deleted file mode 100644 index be31bb2093..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.nn.softmax(logits, name=None)` {#softmax} - -Computes softmax activations. - -For each batch `i` and class `j` we have - - softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i])) - -##### Args: - - -* `logits`: A `Tensor`. Must be one of the following types: `float32`, `float64`. - 2-D with shape `[batch_size, num_classes]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `logits`. Same shape as `logits`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softsign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softsign.md deleted file mode 100644 index 971b2a8134..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softsign.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.nn.softsign(features, name=None)` {#softsign} - -Computes softsign: `features / (abs(features) + 1)`. - -##### Args: - - -* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `features`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.sufficient_statistics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.sufficient_statistics.md deleted file mode 100644 index 92cb5596e6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.sufficient_statistics.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.nn.sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None)` {#sufficient_statistics} - -Calculate the sufficient statistics for the mean and variance of `x`. - -These sufficient statistics are computed using the one pass algorithm on -an input that's optionally shifted. See: -https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data - -##### Args: - - -* `x`: A `Tensor`. -* `axes`: Array of ints. Axes along which to compute mean and variance. -* `shift`: A `Tensor` containing the value by which to shift the data for - numerical stability, or `None` if no shift is to be performed. A shift - close to the true mean provides the most numerically stable results. -* `keep_dims`: produce statistics with the same dimensionality as the input. -* `name`: Name used to scope the operations that compute the sufficient stats. - -##### Returns: - - Four `Tensor` objects of the same type as `x`: - * the count (number of elements to average over). - * the (possibly shifted) sum of the elements in the array. - * the (possibly shifted) sum of squares of the elements in the array. - * the shift by which the mean must be corrected or None if `shift` is None. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.zero_fraction.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.zero_fraction.md new file mode 100644 index 0000000000..f4d126a041 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.zero_fraction.md @@ -0,0 +1,21 @@ +### `tf.nn.zero_fraction(value, name=None)` {#zero_fraction} + +Returns the fraction of zeros in `value`. + +If `value` is empty, the result is `nan`. + +This is useful in summaries to measure and report sparsity. For example, + + z = tf.Relu(...) + summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z)) + +##### Args: + + +* `value`: A tensor of numeric type. +* `name`: A name for the operation (optional). + +##### Returns: + + The fraction of zeros in `value`, with type `float32`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones.md new file mode 100644 index 0000000000..8a4c9073d0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones.md @@ -0,0 +1,24 @@ +### `tf.ones(shape, dtype=tf.float32, name=None)` {#ones} + +Creates a tensor with all elements set to 1. + +This operation returns a tensor of type `dtype` with shape `shape` and all +elements set to 1. + +For example: + +```python +tf.ones([2, 3], int32) ==> [[1, 1, 1], [1, 1, 1]] +``` + +##### Args: + + +* `shape`: Either a list of integers, or a 1-D `Tensor` of type `int32`. +* `dtype`: The type of an element in the resulting `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with all elements set to 1. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.tf_record_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.tf_record_iterator.md deleted file mode 100644 index f5e90ea422..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.tf_record_iterator.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.python_io.tf_record_iterator(path)` {#tf_record_iterator} - -An iterator that read the records from a TFRecords file. - -##### Args: - - -* `path`: The path to the TFRecords file. - -##### Yields: - - Strings. - -##### Raises: - - -* `IOError`: If `path` cannot be opened for reading. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_crop.md new file mode 100644 index 0000000000..d389872919 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_crop.md @@ -0,0 +1,25 @@ +### `tf.random_crop(value, size, seed=None, name=None)` {#random_crop} + +Randomly crops a tensor to a given size. + +Slices a shape `size` portion out of `value` at a uniformly chosen offset. +Requires `value.shape >= size`. + +If a dimension should not be cropped, pass the full size of that dimension. +For example, RGB images can be cropped with +`size = [crop_height, crop_width, 3]`. + +##### Args: + + +* `value`: Input tensor to crop. +* `size`: 1-D tensor with size the rank of `value`. +* `seed`: Python integer. Used to create a random seed. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for this operation (optional). + +##### Returns: + + A cropped tensor of the same rank as `value` and shape `size`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_uniform_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_uniform_initializer.md new file mode 100644 index 0000000000..1afd318d3b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_uniform_initializer.md @@ -0,0 +1,25 @@ +### `tf.random_uniform_initializer(minval=0.0, maxval=1.0, seed=None, dtype=tf.float32)` {#random_uniform_initializer} + +Returns an initializer that generates tensors with a uniform distribution. + +##### Args: + + +* `minval`: a python scalar or a scalar tensor. lower bound of the range + of random values to generate. +* `maxval`: a python scalar or a scalar tensor. upper bound of the range + of random values to generate. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer that generates tensors with a uniform distribution. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.range.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.range.md deleted file mode 100644 index c33825d3be..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.range.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.range(start, limit=None, delta=1, name='range')` {#range} - -Creates a sequence of integers. - -Creates a sequence of integers that begins at `start` and extends by -increments of `delta` up to but not including `limit`. - -Like the Python builtin `range`, `start` defaults to 0, so that -`range(n) = range(0, n)`. 
- -For example: - -``` -# 'start' is 3 -# 'limit' is 18 -# 'delta' is 3 -tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15] - -# 'limit' is 5 -tf.range(limit) ==> [0, 1, 2, 3, 4] -``` - -##### Args: - - -* `start`: A 0-D (scalar) of type `int32`. First entry in sequence. - Defaults to 0. -* `limit`: A 0-D (scalar) of type `int32`. Upper limit of sequence, - exclusive. -* `delta`: A 0-D `Tensor` (scalar) of type `int32`. Optional. Default is 1. - Number that increments `start`. -* `name`: A name for the operation (optional). - -##### Returns: - - An 1-D `int32` `Tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_all.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_all.md deleted file mode 100644 index 3137d5c49e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_all.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_all} - -Computes the "logical and" of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. -Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -For example: - -```python -# 'x' is [[True, True] -# [False, False]] -tf.reduce_all(x) ==> False -tf.reduce_all(x, 0) ==> [False, False] -tf.reduce_all(x, 1) ==> [True, False] -``` - -##### Args: - - -* `input_tensor`: The boolean tensor to reduce. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. -* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reverse_sequence.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reverse_sequence.md deleted file mode 100644 index fac4ac2ebe..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reverse_sequence.md +++ /dev/null @@ -1,76 +0,0 @@ -### `tf.reverse_sequence(input, seq_lengths, seq_dim, batch_dim=None, name=None)` {#reverse_sequence} - -Reverses variable length slices. - -This op first slices `input` along the dimension `batch_dim`, and for each -slice `i`, reverses the first `seq_lengths[i]` elements along -the dimension `seq_dim`. - -The elements of `seq_lengths` must obey `seq_lengths[i] < input.dims[seq_dim]`, -and `seq_lengths` must be a vector of length `input.dims[batch_dim]`. - -The output slice `i` along dimension `batch_dim` is then given by input -slice `i`, with the first `seq_lengths[i]` slices along dimension -`seq_dim` reversed. - -For example: - -```prettyprint -# Given this: -batch_dim = 0 -seq_dim = 1 -input.dims = (4, 8, ...) -seq_lengths = [7, 2, 3, 5] - -# then slices of input are reversed on seq_dim, but only up to seq_lengths: -output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...] -output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...] -output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...] -output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...] - -# while entries past seq_lens are copied through: -output[0, 7:, :, ...] = input[0, 7:, :, ...] -output[1, 2:, :, ...] = input[1, 2:, :, ...] -output[2, 3:, :, ...] = input[2, 3:, :, ...] -output[3, 2:, :, ...] = input[3, 2:, :, ...] -``` - -In contrast, if: - -```prettyprint -# Given this: -batch_dim = 2 -seq_dim = 0 -input.dims = (8, ?, 4, ...) -seq_lengths = [7, 2, 3, 5] - -# then slices of input are reversed on seq_dim, but only up to seq_lengths: -output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...] -output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...] -output[0:3, :, 2, :, ...] 
= input[3:0:-1, :, 2, :, ...] -output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...] - -# while entries past seq_lens are copied through: -output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...] -output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...] -output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...] -output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...] -``` - -##### Args: - - -* `input`: A `Tensor`. The input to reverse. -* `seq_lengths`: A `Tensor` of type `int64`. - 1-D with length `input.dims(batch_dim)` and - `max(seq_lengths) < input.dims(seq_dim)` -* `seq_dim`: An `int`. The dimension which is partially reversed. -* `batch_dim`: An optional `int`. Defaults to `0`. - The dimension along which reversal is performed. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - The partially reversed input. It has the same shape as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sign.md new file mode 100644 index 0000000000..f0c021a741 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sign.md @@ -0,0 +1,18 @@ +### `tf.sign(x, name=None)` {#sign} + +Returns an element-wise indication of the sign of a number. + +`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`. + +For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sin.md new file mode 100644 index 0000000000..aeeaf0c7e6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sin.md @@ -0,0 +1,14 @@ +### `tf.sin(x, name=None)` {#sin} + +Computes sin of x element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.space_to_depth.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.space_to_depth.md deleted file mode 100644 index 68706d2e5a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.space_to_depth.md +++ /dev/null @@ -1,87 +0,0 @@ -### `tf.space_to_depth(input, block_size, name=None)` {#space_to_depth} - -SpaceToDepth for tensors of type T. - -Rearranges blocks of spatial data, into depth. More specifically, -this op outputs a copy of the input tensor where values from the `height` -and `width` dimensions are moved to the `depth` dimension. -The attr `block_size` indicates the input block size and how the data is moved. - - * Non-overlapping blocks of size `block_size x block size` are rearranged - into depth at each location. - * The depth of the output tensor is `input_depth * block_size * block_size`. - * The input tensor's height and width must be divisible by block_size. 
- -That is, assuming the input is in the shape: -`[batch, height, width, depth]`, -the shape of the output will be: -`[batch, height/block_size, width/block_size, depth*block_size*block_size]` - -This operation requires that the input tensor be of rank 4, and that -`block_size` be >=1 and a divisor of both the input `height` and `width`. - -This operation is useful for resizing the activations between convolutions -(but keeping all data), e.g. instead of pooling. It is also useful for training -purely convolutional models. - -For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2: - -```prettyprint -x = [[[[1], [2]], - [[3], [4]]]] -``` - -This operation will output a tensor of shape `[1, 1, 1, 4]`: - -```prettyprint -[[[[1, 2, 3, 4]]]] -``` - -Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, -the corresponding output will have a single element (i.e. width and height are -both 1) and will have a depth of 4 channels (1 * block_size * block_size). -The output element shape is `[1, 1, 4]`. - -For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g. - -```prettyprint -x = [[[[1, 2, 3], [4, 5, 6]], - [[7, 8, 9], [10, 11, 12]]]] -``` - -This operation, for block_size of 2, will return the following tensor of shape -`[1, 1, 1, 12]` - -```prettyprint -[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]] -``` - -Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2: - -```prettyprint -x = [[[[1], [2], [5], [6]], - [[3], [4], [7], [8]], - [[9], [10], [13], [14]], - [[11], [12], [15], [16]]]] -``` - -the operator will return the following tensor of shape `[1 2 2 4]`: - -```prettyprint -x = [[[[1, 2, 3, 4], - [5, 6, 7, 8]], - [[9, 10, 11, 12], - [13, 14, 15, 16]]]] -``` - -##### Args: - - -* `input`: A `Tensor`. -* `block_size`: An `int`. The size of the spatial block. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_mask.md deleted file mode 100644 index d2fa38733b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_mask.md +++ /dev/null @@ -1,39 +0,0 @@ -### `tf.sparse_mask(a, mask_indices, name=None)` {#sparse_mask} - -Masks elements of `IndexedSlices`. - -Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that -contains a subset of the slices of `a`. Only the slices at indices specified -in `mask_indices` are returned. - -This is useful when you need to extract a subset of slices in an -`IndexedSlices` object. - -For example: - -```python -# `a` contains slices at indices [12, 26, 37, 45] from a large tensor -# with shape [1000, 10] -a.indices => [12, 26, 37, 45] -tf.shape(a.values) => [4, 10] - -# `b` will be the subset of `a` slices at its second and third indices, so -# we want to mask of its first and last indices (which are at absolute -# indices 12, 45) -b = tf.sparse_mask(a, [12, 45]) - -b.indices => [26, 37] -tf.shape(b.values) => [2, 10] - -``` - -##### Args: - - * `a`: An `IndexedSlices` instance. - * `mask_indices`: Indices of elements to mask. - * `name`: A name for the operation (optional). - -##### Returns: - - The masked `IndexedSlices` instance. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket_strong.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket_strong.md new file mode 100644 index 0000000000..67cf3b6fd9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket_strong.md @@ -0,0 +1,30 @@ +### `tf.string_to_hash_bucket_strong(input, num_buckets, key, name=None)` {#string_to_hash_bucket_strong} + +Converts each string in the input Tensor to its hash mod by a number of buckets. 
+
+The hash function is deterministic on the content of the string within the
+process. The hash function is a keyed hash function, where attribute `key`
+defines the key of the hash function. `key` is an array of 2 elements.
+
+A strong hash is important when inputs may be malicious, e.g. URLs with
+additional components. Adversaries could try to make their inputs hash to the
+same bucket for a denial-of-service attack or to skew the results. A strong
+hash prevents this by making it difficult, if not infeasible, to compute inputs
+that hash to the same bucket. This comes at a cost of roughly 4x higher compute
+time than tf.string_to_hash_bucket_fast.
+
+##### Args:
+
+
+* `input`: A `Tensor` of type `string`. The strings to assign a hash bucket.
+* `num_buckets`: An `int` that is `>= 1`. The number of buckets.
+* `key`: A list of `ints`.
+  The key for the keyed hash function passed as a list of two uint64
+  elements.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `int64`.
+  A Tensor of the same shape as the input `string_tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.tanh.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.tanh.md
new file mode 100644
index 0000000000..b41e51c019
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.tanh.md
@@ -0,0 +1,16 @@
+### `tf.tanh(x, name=None)` {#tanh}
+
+Computes hyperbolic tangent of `x` element-wise.
+
+##### Args:
+
+
+* `x`: A Tensor with type `float`, `double`, `int32`, `complex64`, `int64`,
+  or `qint32`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A Tensor with the same type as `x` if `x.dtype != qint32` otherwise
+  the return type is `quint8`.
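As a quick numerical check of the definition, using NumPy in place of the TF op:

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])
y = np.tanh(x)                    # element-wise hyperbolic tangent

# tanh(x) = (e^x - e^-x) / (e^x + e^-x); tanh(0) == 0 and every
# output lies strictly inside (-1, 1).
manual = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
assert np.allclose(y, manual)
```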
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.test.main.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.test.main.md deleted file mode 100644 index c7aa9cf801..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.test.main.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.test.main()` {#main} - -Runs all unit tests. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_double.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_double.md deleted file mode 100644 index 0cabea178e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_double.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.to_double(x, name='ToDouble')` {#to_double} - -Casts a tensor to type `float64`. - -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to the `float64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_float.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_float.md deleted file mode 100644 index b45b49b982..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.to_float.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.to_float(x, name='ToFloat')` {#to_float} - -Casts a tensor to type `float32`. - -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to the `float32`. 
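A rough NumPy analogue of these cast helpers (illustrative only: `astype` copies eagerly, whereas the TF ops add graph nodes, and the TF versions raise `TypeError` for uncastable inputs):

```python
import numpy as np

x = np.array([1, 2, 3], dtype=np.int32)
f = x.astype(np.float32)   # behaves like tf.to_float(x)
d = x.astype(np.float64)   # behaves like tf.to_double(x)
# Values are preserved; only the element type changes.
```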
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trace.md deleted file mode 100644 index 3b1e71fda1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trace.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.trace(x, name=None)` {#trace} - -Compute the trace of a tensor `x`. - -`trace(x)` returns the sum of along the diagonal. - -For example: - -```python -# 'x' is [[1, 1], -# [1, 1]] -tf.trace(x) ==> 2 - -# 'x' is [[1,2,3], -# [4,5,6], -# [7,8,9]] -tf.trace(x) ==> 15 -``` - -##### Args: - - -* `x`: 2-D tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - The trace of input tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.FtrlOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.FtrlOptimizer.md deleted file mode 100644 index 4fe719ee6b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.FtrlOptimizer.md +++ /dev/null @@ -1,32 +0,0 @@ -Optimizer that implements the FTRL algorithm. - -See this [paper]( -https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf). - -- - - - -#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__} - -Construct a new FTRL optimizer. - -##### Args: - - -* `learning_rate`: A float value or a constant float `Tensor`. -* `learning_rate_power`: A float value, must be less or equal to zero. -* `initial_accumulator_value`: The starting value for accumulators. - Only positive values are allowed. -* `l1_regularization_strength`: A float value, must be greater than or - equal to zero. -* `l2_regularization_strength`: A float value, must be greater than or - equal to zero. 
-* `use_locking`: If `True` use locks for update operations. -* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "Ftrl". - -##### Raises: - - -* `ValueError`: If one of the arguments is invalid. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.LooperThread.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.LooperThread.md new file mode 100644 index 0000000000..046f35d718 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.LooperThread.md @@ -0,0 +1,215 @@ +A thread that runs code repeatedly, optionally on a timer. + +This thread class is intended to be used with a `Coordinator`. It repeatedly +runs code specified either as `target` and `args` or by the `run_loop()` +method. + +Before each run the thread checks if the coordinator has requested stop. In +that case the looper thread terminates immediately. + +If the code being run raises an exception, that exception is reported to the +coordinator and the thread terminates. The coordinator will then request all +the other threads it coordinates to stop. + +You typically pass looper threads to the supervisor `Join()` method. +- - - + +#### `tf.train.LooperThread.__init__(coord, timer_interval_secs, target=None, args=None, kwargs=None)` {#LooperThread.__init__} + +Create a LooperThread. + +##### Args: + + +* `coord`: A Coordinator. +* `timer_interval_secs`: Time boundaries at which to call Run(), or None + if it should be called back to back. +* `target`: Optional callable object that will be executed in the thread. +* `args`: Optional arguments to pass to `target` when calling it. +* `kwargs`: Optional keyword arguments to pass to `target` when calling it. + +##### Raises: + + +* `ValueError`: If one of the arguments is invalid. 
+ + +- - - + +#### `tf.train.LooperThread.daemon` {#LooperThread.daemon} + +A boolean value indicating whether this thread is a daemon thread (True) or not (False). + +This must be set before start() is called, otherwise RuntimeError is +raised. Its initial value is inherited from the creating thread; the +main thread is not a daemon thread and therefore all threads created in +the main thread default to daemon = False. + +The entire Python program exits when no alive non-daemon threads are +left. + + +- - - + +#### `tf.train.LooperThread.getName()` {#LooperThread.getName} + + + + +- - - + +#### `tf.train.LooperThread.ident` {#LooperThread.ident} + +Thread identifier of this thread or None if it has not been started. + +This is a nonzero integer. See the thread.get_ident() function. Thread +identifiers may be recycled when a thread exits and another thread is +created. The identifier is available even after the thread has exited. + + +- - - + +#### `tf.train.LooperThread.isAlive()` {#LooperThread.isAlive} + +Return whether the thread is alive. + +This method returns True just before the run() method starts until just +after the run() method terminates. The module function enumerate() +returns a list of all alive threads. + + +- - - + +#### `tf.train.LooperThread.isDaemon()` {#LooperThread.isDaemon} + + + + +- - - + +#### `tf.train.LooperThread.is_alive()` {#LooperThread.is_alive} + +Return whether the thread is alive. + +This method returns True just before the run() method starts until just +after the run() method terminates. The module function enumerate() +returns a list of all alive threads. + + +- - - + +#### `tf.train.LooperThread.join(timeout=None)` {#LooperThread.join} + +Wait until the thread terminates. + +This blocks the calling thread until the thread whose join() method is +called terminates -- either normally or through an unhandled exception +or until the optional timeout occurs. 
+ +When the timeout argument is present and not None, it should be a +floating point number specifying a timeout for the operation in seconds +(or fractions thereof). As join() always returns None, you must call +isAlive() after join() to decide whether a timeout happened -- if the +thread is still alive, the join() call timed out. + +When the timeout argument is not present or None, the operation will +block until the thread terminates. + +A thread can be join()ed many times. + +join() raises a RuntimeError if an attempt is made to join the current +thread as that would cause a deadlock. It is also an error to join() a +thread before it has been started and attempts to do so raises the same +exception. + + +- - - + +#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop} + +Start a LooperThread that calls a function periodically. + +If `timer_interval_secs` is None the thread calls `target(args)` +repeatedly. Otherwise `target(args)` is called every `timer_interval_secs` +seconds. The thread terminates when a stop of the coordinator is +requested. + +##### Args: + + +* `coord`: A Coordinator. +* `timer_interval_secs`: Number. Time boundaries at which to call `target`. +* `target`: A callable object. +* `args`: Optional arguments to pass to `target` when calling it. +* `kwargs`: Optional keyword arguments to pass to `target` when calling it. + +##### Returns: + + The started thread. + + +- - - + +#### `tf.train.LooperThread.name` {#LooperThread.name} + +A string used for identification purposes only. + +It has no semantics. Multiple threads may be given the same name. The +initial name is set by the constructor. + + +- - - + +#### `tf.train.LooperThread.run()` {#LooperThread.run} + + + + +- - - + +#### `tf.train.LooperThread.run_loop()` {#LooperThread.run_loop} + +Called at 'timer_interval_secs' boundaries. 
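The looping behavior documented above can be sketched with plain Python threading. `SimpleCoord` below is a hypothetical minimal stand-in for the stop-request side of `tf.train.Coordinator`, not the real class (which also handles exception reporting and joining):

```python
import threading
import time

class SimpleCoord:
    # Hypothetical minimal stand-in for the stop-request side of
    # tf.train.Coordinator.
    def __init__(self):
        self._stop = threading.Event()
    def request_stop(self):
        self._stop.set()
    def should_stop(self):
        return self._stop.is_set()
    def wait_for_stop(self, timeout):
        return self._stop.wait(timeout)

def looper(coord, timer_interval_secs, target, args=()):
    # Mirrors LooperThread.loop(): call target(*args) repeatedly, either
    # back to back (interval None) or once per interval, until a stop
    # is requested.
    def run():
        while not coord.should_stop():
            target(*args)
            if timer_interval_secs is not None:
                coord.wait_for_stop(timer_interval_secs)
    thread = threading.Thread(target=run)
    thread.start()
    return thread

coord = SimpleCoord()
calls = []
t = looper(coord, 0.01, calls.append, (1,))
time.sleep(0.05)          # let the loop run a few times
coord.request_stop()      # the thread notices the stop request and exits
t.join()
```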
+ + +- - - + +#### `tf.train.LooperThread.setDaemon(daemonic)` {#LooperThread.setDaemon} + + + + +- - - + +#### `tf.train.LooperThread.setName(name)` {#LooperThread.setName} + + + + +- - - + +#### `tf.train.LooperThread.start()` {#LooperThread.start} + +Start the thread's activity. + +It must be called at most once per thread object. It arranges for the +object's run() method to be invoked in a separate thread of control. + +This method will raise a RuntimeError if called more than once on the +same thread object. + + +- - - + +#### `tf.train.LooperThread.start_loop()` {#LooperThread.start_loop} + +Called when the thread starts. + + +- - - + +#### `tf.train.LooperThread.stop_loop()` {#LooperThread.stop_loop} + +Called when the thread stops. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md deleted file mode 100644 index 45256f65fc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md +++ /dev/null @@ -1,18 +0,0 @@ -Optimizer that implements the Momentum algorithm. - -- - - - -#### `tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum')` {#MomentumOptimizer.__init__} - -Construct a new Momentum optimizer. - -##### Args: - - -* `learning_rate`: A `Tensor` or a floating point value. The learning rate. -* `momentum`: A `Tensor` or a floating point value. The momentum. -* `use_locking`: If `True` use locks for update operations. -* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "Momentum". 
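As a sketch of the underlying update rule, here is a minimal NumPy version of the classical momentum formulation (accumulator as a decaying gradient sum); the graph-based optimizer adds slot-variable and locking machinery on top, so treat this as an approximation of its behavior:

```python
import numpy as np

def momentum_step(var, accum, grad, learning_rate, momentum):
    # Classical momentum:
    #   accum <- momentum * accum + grad
    #   var   <- var - learning_rate * accum
    accum = momentum * accum + grad
    var = var - learning_rate * accum
    return var, accum

var = np.array([1.0])
accum = np.array([0.0])
grad = np.array([0.5])
var, accum = momentum_step(var, accum, grad, learning_rate=0.1, momentum=0.9)
# First step: accum == [0.5], var == [1.0 - 0.1 * 0.5] == [0.95]
```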
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.RMSPropOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.RMSPropOptimizer.md deleted file mode 100644 index 317f1e2adf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.RMSPropOptimizer.md +++ /dev/null @@ -1,23 +0,0 @@ -Optimizer that implements the RMSProp algorithm. - -See the [paper] -(http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). - -- - - - -#### `tf.train.RMSPropOptimizer.__init__(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp')` {#RMSPropOptimizer.__init__} - -Construct a new RMSProp optimizer. - -##### Args: - - -* `learning_rate`: A Tensor or a floating point value. The learning rate. -* `decay`: Discounting factor for the history/coming gradient -* `momentum`: A scalar tensor. -* `epsilon`: Small value to avoid zero denominator. -* `use_locking`: If True use locks for update operation. -* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "RMSProp". - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Saver.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Saver.from_proto.md deleted file mode 100644 index 247f621e8a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Saver.from_proto.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.train.Saver.from_proto(saver_def)` {#Saver.from_proto} - -Returns a `Saver` object created from `saver_def`. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Server.md new file mode 100644 index 0000000000..3f87ed3bf0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Server.md @@ -0,0 +1,113 @@ +An in-process TensorFlow server, for use in distributed training. + +A `tf.train.Server` instance encapsulates a set of devices and a +[`tf.Session`](../../api_docs/python/client.md#Session) target that +can participate in distributed training. A server belongs to a +cluster (specified by a [`tf.train.ClusterSpec`](#ClusterSpec)), and +corresponds to a particular task in a named job. The server can +communicate with any other server in the same cluster. + +- - - + +#### `tf.train.Server.__init__(server_or_cluster_def, job_name=None, task_index=None, protocol=None, start=True)` {#Server.__init__} + +Creates a new server with the given definition. + +The `job_name`, `task_index`, and `protocol` arguments are optional, and +override any information provided in `server_or_cluster_def`. + +##### Args: + + +* `server_or_cluster_def`: A `tf.train.ServerDef` or + `tf.train.ClusterDef` protocol buffer, or a + `tf.train.ClusterSpec` object, describing the server to be + created and/or the cluster of which it is a member. +* `job_name`: (Optional.) Specifies the name of the job of which the server + is a member. Defaults to the value in `server_or_cluster_def`, if + specified. +* `task_index`: (Optional.) Specifies the task index of the server in its + job. Defaults to the value in `server_or_cluster_def`, if specified. + Otherwise defaults to 0 if the server's job has only one task. +* `protocol`: (Optional.) Specifies the protocol to be used by the server. + Acceptable values include `"grpc"`. Defaults to the value in + `server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`. +* `start`: (Optional.) 
Boolean, indicating whether to start the server + after creating it. Defaults to `True`. + +##### Raises: + + tf.errors.OpError: Or one of its subclasses if an error occurs while + creating the TensorFlow server. + + +- - - + +#### `tf.train.Server.create_local_server(start=True)` {#Server.create_local_server} + +Creates a new single-process cluster running on the local host. + +This method is a convenience wrapper for creating a +`tf.train.Server` with a `tf.train.ServerDef` that specifies a +single-process cluster containing a single task in a job called +`"local"`. + +##### Args: + + +* `start`: (Optional.) Boolean, indicating whether to start the server after + creating it. Defaults to `True`. + +##### Returns: + + A local `tf.train.Server`. + + +- - - + +#### `tf.train.Server.target` {#Server.target} + +Returns the target for a `tf.Session` to connect to this server. + +To create a +[`tf.Session`](../../api_docs/python/client.md#Session) that +connects to this server, use the following snippet: + +```python +server = tf.train.Server(...) +with tf.Session(server.target): + # ... +``` + +##### Returns: + + A string containing a session target for this server. + + + +- - - + +#### `tf.train.Server.start()` {#Server.start} + +Starts this server. + +##### Raises: + + tf.errors.OpError: Or one of its subclasses if an error occurs while + starting the TensorFlow server. + + +- - - + +#### `tf.train.Server.join()` {#Server.join} + +Blocks until the server has shut down. + +This method currently blocks forever. + +##### Raises: + + tf.errors.OpError: Or one of its subclasses if an error occurs while + joining the TensorFlow server. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md deleted file mode 100644 index a7f5aef5f1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md +++ /dev/null @@ -1,170 +0,0 @@ -Writes `Summary` protocol buffers to event files. - -The `SummaryWriter` class provides a mechanism to create an event file in a -given directory and add summaries and events to it. The class updates the -file contents asynchronously. This allows a training program to call methods -to add data to the file directly from the training loop, without slowing down -training. - -- - - - -#### `tf.train.SummaryWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#SummaryWriter.__init__} - -Creates a `SummaryWriter` and an event file. - -On construction the summary writer creates a new event file in `logdir`. -This event file will contain `Event` protocol buffers constructed when you -call one of the following functions: `add_summary()`, `add_session_log()`, -`add_event()`, or `add_graph()`. - -If you pass a `Graph` to the constructor it is added to -the event file. (This is equivalent to calling `add_graph()` later). - -TensorBoard will pick the graph from the file and display it graphically so -you can interactively explore the graph you built. You will usually pass -the graph from the session in which you launched it: - -```python -...create a graph... -# Launch the graph in a session. -sess = tf.Session() -# Create a summary writer, add the 'graph' to the event file. -writer = tf.train.SummaryWriter(, sess.graph) -``` - -The other arguments to the constructor control the asynchronous writes to -the event file: - -* `flush_secs`: How often, in seconds, to flush the added summaries - and events to disk. 
-* `max_queue`: Maximum number of summaries or events pending to be
-  written to disk before one of the 'add' calls block.
-
-##### Args:
-
-
-* `logdir`: A string. Directory where event file will be written.
-* `graph`: A `Graph` object, such as `sess.graph`.
-* `max_queue`: Integer. Size of the queue for pending events and summaries.
-* `flush_secs`: Number. How often, in seconds, to flush the
-  pending events and summaries to disk.
-* `graph_def`: DEPRECATED: Use the `graph` argument instead.
-
-
-- - -
-
-#### `tf.train.SummaryWriter.add_summary(summary, global_step=None)` {#SummaryWriter.add_summary}
-
-Adds a `Summary` protocol buffer to the event file.
-
-This method wraps the provided summary in an `Event` protocol buffer
-and adds it to the event file.
-
-You can pass the result of evaluating any summary op, using
-[`Session.run()`](client.md#Session.run) or
-[`Tensor.eval()`](framework.md#Tensor.eval), to this
-function. Alternatively, you can pass a `tf.Summary` protocol
-buffer that you populate with your own data. The latter is
-commonly done to report evaluation results in event files.
-
-##### Args:
-
-
-* `summary`: A `Summary` protocol buffer, optionally serialized as a string.
-* `global_step`: Number. Optional global step value to record with the
-  summary.
-
-
-- - -
-
-#### `tf.train.SummaryWriter.add_session_log(session_log, global_step=None)` {#SummaryWriter.add_session_log}
-
-Adds a `SessionLog` protocol buffer to the event file.
-
-This method wraps the provided session log in an `Event` protocol buffer
-and adds it to the event file.
-
-##### Args:
-
-
-* `session_log`: A `SessionLog` protocol buffer.
-* `global_step`: Number. Optional global step value to record with the
-  summary.
-
-
-- - -
-
-#### `tf.train.SummaryWriter.add_event(event)` {#SummaryWriter.add_event}
-
-Adds an event to the event file.
-
-##### Args:
-
-
-* `event`: An `Event` protocol buffer.
- - -- - - - -#### `tf.train.SummaryWriter.add_graph(graph, global_step=None, graph_def=None)` {#SummaryWriter.add_graph} - -Adds a `Graph` to the event file. - -The graph described by the protocol buffer will be displayed by -TensorBoard. Most users pass a graph in the constructor instead. - -##### Args: - - -* `graph`: A `Graph` object, such as `sess.graph`. -* `global_step`: Number. Optional global step counter to record with the - graph. -* `graph_def`: DEPRECATED. Use the `graph` parameter instead. - -##### Raises: - - -* `ValueError`: If both graph and graph_def are passed to the method. - - -- - - - -#### `tf.train.SummaryWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#SummaryWriter.add_run_metadata} - -Adds a metadata information for a single session.run() call. - -##### Args: - - -* `run_metadata`: A `RunMetadata` protobuf object. -* `tag`: The tag name for this metadata. -* `global_step`: Number. Optional global step counter to record with the - StepStats. - -##### Raises: - - -* `ValueError`: If the provided tag was already used for this type of event. - - - -- - - - -#### `tf.train.SummaryWriter.flush()` {#SummaryWriter.flush} - -Flushes the event file to disk. - -Call this method to make sure that all pending events have been written to -disk. - - -- - - - -#### `tf.train.SummaryWriter.close()` {#SummaryWriter.close} - -Flushes the event file to disk and close the file. - -Call this method when you do not need the summary writer anymore. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Supervisor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Supervisor.md deleted file mode 100644 index b3d17eac2d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.Supervisor.md +++ /dev/null @@ -1,845 +0,0 @@ -A training helper that checkpoints models and computes summaries. 
-
-The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,
-and a `SessionManager` that takes care of common needs of TensorFlow
-training programs.
-
-#### Use for a single program
-
-```python
-with tf.Graph().as_default():
-  ...add operations to the graph...
-  # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
-  sv = Supervisor(logdir='/tmp/mydir')
-  # Get a TensorFlow session managed by the supervisor.
-  with sv.managed_session(FLAGS.master) as sess:
-    # Use the session to train the graph.
-    while not sv.should_stop():
-      sess.run()
-```
-
-Within the `with sv.managed_session()` block all variables in the graph have
-been initialized. In addition, a few services have been started to
-checkpoint the model and add summaries to the event log.
-
-If the program crashes and is restarted, the managed session automatically
-reinitializes variables from the most recent checkpoint.
-
-The supervisor is notified of any exception raised by one of the services.
-After an exception is raised, `should_stop()` returns `True`. In that case
-the training loop should also stop. This is why the training loop has to
-check for `sv.should_stop()`.
-
-Exceptions that indicate that the training inputs have been exhausted,
-`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`
-but are not re-raised from the `with` block: they indicate a normal
-termination.
-
-#### Use for multiple replicas
-
-To train with replicas you deploy the same program in a `Cluster`.
-One of the tasks must be identified as the *chief*: the task that handles
-initialization, checkpoints, summaries, and recovery. The other tasks
-depend on the *chief* for these services.
-
-The only change you have to make to the single program code is to indicate
-if the program is running as the *chief*.
-
-```python
-# Choose a task as the chief. This could be based on server_def.task_index,
-# or job_def.name, or job_def.tasks. It's entirely up to the end user.
-# But there can be only one *chief*.
-is_chief = (server_def.task_index == 0)
-server = tf.train.Server(server_def)
-
-with tf.Graph().as_default():
-  ...add operations to the graph...
-  # Create a Supervisor that uses log directory on a shared file system.
-  # Indicate if you are the 'chief'
-  sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
-  # Get a Session in a TensorFlow server on the cluster.
-  with sv.managed_session(server.target) as sess:
-    # Use the session to train the graph.
-    while not sv.should_stop():
-      sess.run()
-```
-
-In the *chief* task, the `Supervisor` works exactly as in the first example
-above. In the other tasks `sv.managed_session()` waits for the Model to have
-been initialized before returning a session to the training code. The
-non-chief tasks depend on the chief task for initializing the model.
-
-If one of the tasks crashes and restarts, `managed_session()`
-checks if the Model is initialized. If yes, it just creates a session and
-returns it to the training code that proceeds normally. If the model needs
-to be initialized, the chief task takes care of reinitializing it; the other
-tasks just wait for the model to have been initialized.
-
-NOTE: This modified program still works fine as a single program.
-The single program marks itself as the chief.
-
-#### What `master` string to use
-
-Whether you are running on your machine or in the cluster you can use the
-following values for the --master flag:
-
-* Specifying `''` requests an in-process session that does not use RPC.
-
-* Specifying `'local'` requests a session that uses the RPC-based
-  "Master interface" to run TensorFlow programs. See
-  [`tf.train.Server.create_local_server()`](#Server.create_local_server) for
-  details.
-
-* Specifying `'grpc://hostname:port'` requests a session that uses
-  the RPC interface to a specific , and also allows the in-process
-  master to access remote tensorflow workers.
Often, it is
-  appropriate to pass `server.target` (for some `tf.train.Server`
-  named `server`).
-
-#### Advanced use
-
-##### Launching additional services
-
-`managed_session()` launches the Checkpoint and Summary services (threads).
-If you need more services to run you can simply launch them in the block
-controlled by `managed_session()`.
-
-Example: Start a thread to print losses. We want this thread to run
-every 60 seconds, so we launch it with `sv.loop()`.
-
-  ```python
-  ...
-  sv = Supervisor(logdir='/tmp/mydir')
-  with sv.managed_session(FLAGS.master) as sess:
-    sv.loop(60, print_loss, (sess))
-    while not sv.should_stop():
-      sess.run(my_train_op)
-  ```
-
-##### Launching fewer services
-
-`managed_session()` launches the "summary" and "checkpoint" threads which use
-either the optional `summary_op` and `saver` passed to the constructor, or
-default ones created automatically by the supervisor. If you want to run
-your own summary and checkpointing logic, disable these services by passing
-`None` to the `summary_op` and `saver` parameters.
-
-Example: Create summaries manually every 100 steps in the chief.
-
-  ```python
-  # Create a Supervisor with no automatic summaries.
-  sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
-  # As summary_op was None, managed_session() does not start the
-  # summary thread.
-  with sv.managed_session(FLAGS.master) as sess:
-    for step in xrange(1000000):
-      if sv.should_stop():
-        break
-      if is_chief and step % 100 == 0:
-        # Create the summary every 100 chief steps.
-        sv.summary_computed(sess, sess.run(my_summary_op))
-      else:
-        # Train normally
-        sess.run(my_train_op)
-  ```
-
-##### Custom model initialization
-
-`managed_session()` only supports initializing the model by running an
-`init_op` or restoring from the latest checkpoint. If you have special
-initialization needs, see how to specify a `local_init_op` when creating the
-supervisor.
You can also use the `SessionManager` directly to create a -session and check if it could be initialized automatically. - -- - - - -#### `tf.train.Supervisor.__init__(graph=None, ready_op=0, is_chief=True, init_op=0, init_feed_dict=None, local_init_op=0, logdir=None, summary_op=0, saver=0, global_step=0, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=0, init_fn=None)` {#Supervisor.__init__} - -Create a `Supervisor`. - -##### Args: - - -* `graph`: A `Graph`. The graph that the model will use. Defaults to the - default `Graph`. The supervisor may add operations to the graph before - creating a session, but the graph should not be modified by the caller - after passing it to the supervisor. -* `ready_op`: 1-D string `Tensor`. This tensor is evaluated by supervisors in - `prepare_or_wait_for_session()` to check if the model is ready to use. - The model is considered ready if it returns an empty array. Defaults to - the tensor returned from `tf.report_uninitialized_variables()` If - `None`, the model is not checked for readiness. -* `is_chief`: If True, create a chief supervisor in charge of initializing - and restoring the model. If False, create a supervisor that relies - on a chief supervisor for inits and restore. -* `init_op`: `Operation`. Used by chief supervisors to initialize the model - when it can not be recovered. Defaults to an `Operation` that - initializes all variables. If `None`, no initialization is done - automatically unless you pass a value for `init_fn`, see below. -* `init_feed_dict`: A dictionary that maps `Tensor` objects to feed values. - This feed dictionary will be used when `init_op` is evaluated. -* `local_init_op`: `Operation`. Used by all supervisors to run initializations - that should run for every new supervisor instance. By default these - are table initializers and initializers for local variables. 
-    If `None`, no further per supervisor-instance initialization is
-    done automatically.
-* `logdir`: A string. Optional path to a directory where to checkpoint the
-  model and log events for the visualizer. Used by chief supervisors.
-  The directory will be created if it does not exist.
-* `summary_op`: An `Operation` that returns a Summary for the event logs.
-  Used by chief supervisors if a `logdir` was specified. Defaults to the
-  operation returned from merge_all_summaries(). If `None`, summaries are
-  not computed automatically.
-* `saver`: A Saver object. Used by chief supervisors if a `logdir` was
-  specified. Defaults to the saver returned by Saver().
-  If `None`, the model is not saved automatically.
-* `global_step`: An integer Tensor of size 1 that counts steps. The value
-  from 'global_step' is used in summaries and checkpoint filenames.
-  Defaults to the op named 'global_step' in the graph if it exists, is of
-  rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global
-  step is not recorded in summaries and checkpoint files. Used by chief
-  supervisors if a `logdir` was specified.
-* `save_summaries_secs`: Number of seconds between the computation of
-  summaries for the event log. Defaults to 120 seconds. Pass 0 to
-  disable summaries.
-* `save_model_secs`: Number of seconds between the creation of model
-  checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints.
-* `recovery_wait_secs`: Number of seconds between checks that the model
-  is ready. Used by supervisors when waiting for a chief supervisor
-  to initialize or restore the model. Defaults to 30 seconds.
-* `stop_grace_secs`: Grace period, in seconds, given to running threads to
-  stop when `stop()` is called. Defaults to 120 seconds.
-* `checkpoint_basename`: The basename for checkpoint saving.
-* `session_manager`: `SessionManager`, which manages Session creation and
-  recovery.
If it is `None`, a default `SessionManager` will be created - with the set of arguments passed in for backwards compatibility. -* `summary_writer`: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None` - to indicate that no summaries should be written. -* `init_fn`: Optional callable used to initialize the model. Called - after the optional `init_op` is called. The callable must accept one - argument, the session being initialized. - -##### Returns: - - A `Supervisor`. - - -- - - - -#### `tf.train.Supervisor.managed_session(master='', config=None, start_standard_services=True, close_summary_writer=True)` {#Supervisor.managed_session} - -Returns a context manager for a managed session. - -This context manager creates and automatically recovers a session. It -optionally starts the standard services that handle checkpoints and -summaries. It monitors exceptions raised from the `with` block or from the -services and stops the supervisor as needed. - -The context manager is typically used as follows: - -```python -def train(): - sv = tf.train.Supervisor(...) - with sv.managed_session() as sess: - for step in xrange(..): - if sv.should_stop(): - break - sess.run() - ...do other things needed at each training step... -``` - -An exception raised from the `with` block or one of the service threads is -raised again when the block exits. This is done after stopping all threads -and closing the session. For example, an `AbortedError` exception, raised -in case of preemption of one of the workers in a distributed model, is -raised again when the block exits. 
- -If you want to retry the training loop in case of preemption, you can do it -as follows: - -```python -def main(...): - while True: - try: - train() - except tf.errors.AbortedError: - pass -``` - -As a special case, exceptions used for control flow, such as -`OutOfRangeError`, which reports that input queues are exhausted, are not -raised again from the `with` block: they indicate a clean termination of -the training loop and are considered normal termination. - -##### Args: - - -* `master`: name of the TensorFlow master to use. See the `tf.Session` - constructor for how this is interpreted. -* `config`: Optional `ConfigProto` proto used to configure the session. - Passed as-is to create the session. -* `start_standard_services`: Whether to start the standard services, - such as checkpoint, summary and step counter. -* `close_summary_writer`: Whether to close the summary writer when - closing the session. Defaults to True. - -##### Returns: - - A context manager that yields a `Session` restored from the latest - checkpoint or initialized from scratch if no checkpoint exists. The - session is closed when the `with` block exits. - - -- - - - -#### `tf.train.Supervisor.prepare_or_wait_for_session(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.prepare_or_wait_for_session} - -Make sure the model is ready to be used. - -Create a session on 'master', recovering or initializing the model as -needed, or wait for a session to be ready. If running as the chief -and `start_standard_services` is set to True, also call the session -manager to start the standard services. - -##### Args: - - -* `master`: name of the TensorFlow master to use. See the `tf.Session` - constructor for how this is interpreted. -* `config`: Optional ConfigProto proto used to configure the session, - which is passed as-is to create the session.
-* `wait_for_checkpoint`: Whether we should wait for the availability of a - checkpoint before creating a session. Defaults to False. -* `max_wait_secs`: Maximum time to wait for the session to become available. -* `start_standard_services`: Whether to start the standard services and the - queue runners. - -##### Returns: - - A Session object that can be used to drive the model. - - -- - - - -#### `tf.train.Supervisor.start_standard_services(sess)` {#Supervisor.start_standard_services} - -Start the standard services for 'sess'. - -This starts services in the background. The services started depend -on the parameters to the constructor and may include: - - - A Summary thread computing summaries every save_summaries_secs. - - A Checkpoint thread saving the model every save_model_secs. - - A StepCounter thread that measures step time. - -##### Args: - - -* `sess`: A Session. - -##### Returns: - - A list of threads that are running the standard services. You can use - the Supervisor's Coordinator to join these threads with: - sv.coord.join() - -##### Raises: - - -* `RuntimeError`: If called with a non-chief Supervisor. -* `ValueError`: If no `logdir` was passed to the constructor, as the - services need a log directory. - - -- - - - -#### `tf.train.Supervisor.start_queue_runners(sess, queue_runners=None)` {#Supervisor.start_queue_runners} - -Start threads for `QueueRunners`. - -Note that the queue runners collected in the graph key `QUEUE_RUNNERS` -are already started automatically when you create a session with the -supervisor, so unless you have non-collected queue runners to start -you do not need to call this explicitly. - -##### Args: - - -* `sess`: A `Session`. -* `queue_runners`: A list of `QueueRunners`. If not specified, we'll use the - list of queue runners gathered in the graph under the key - `GraphKeys.QUEUE_RUNNERS`. - -##### Returns: - - The list of threads started for the `QueueRunners`.
- - -- - - - -#### `tf.train.Supervisor.summary_computed(sess, summary, global_step=None)` {#Supervisor.summary_computed} - -Indicate that a summary was computed. - -##### Args: - - -* `sess`: A `Session` object. -* `summary`: A Summary proto, or a string holding a serialized summary proto. -* `global_step`: Int. The global step this summary is associated with. If - `None`, it will try to fetch the current step. - -##### Raises: - - -* `TypeError`: if 'summary' is not a Summary proto or a string. -* `RuntimeError`: if the Supervisor was created without a `logdir`. - - - -- - - - -#### `tf.train.Supervisor.stop(threads=None, close_summary_writer=True)` {#Supervisor.stop} - -Stop the services and the coordinator. - -This does not close the session. - -##### Args: - - -* `threads`: Optional list of threads to join with the coordinator. If - `None`, defaults to the threads running the standard services, the - threads started for `QueueRunners`, and the threads started by the - `loop()` method. To wait on additional threads, pass the - list in this parameter. -* `close_summary_writer`: Whether to close the `summary_writer`. Defaults to - `True` if the summary writer was created by the supervisor, `False` - otherwise. - - -- - - - -#### `tf.train.Supervisor.request_stop(ex=None)` {#Supervisor.request_stop} - -Request that the coordinator stop the threads. - -See `Coordinator.request_stop()`. - -##### Args: - - -* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by - `sys.exc_info()`. If this is the first call to `request_stop()`, the - corresponding exception is recorded and re-raised from `join()`. - - -- - - - -#### `tf.train.Supervisor.should_stop()` {#Supervisor.should_stop} - -Check if the coordinator was told to stop. - -See `Coordinator.should_stop()`. - -##### Returns: - - True if the coordinator was told to stop, False otherwise.
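The stop protocol that `request_stop()` and `should_stop()` delegate to can be sketched in plain Python. `MiniCoordinator` below is an illustrative stand-in for `tf.train.Coordinator`, not the real class; it only shows the two behaviors documented above (the stop flag, and recording the first reported exception):

```python
import threading

# Illustrative sketch of the Coordinator stop protocol; a stand-in,
# not the real tf.train.Coordinator.
class MiniCoordinator:
    def __init__(self):
        self._stop_event = threading.Event()
        self._exc = None

    def request_stop(self, ex=None):
        # Only the first reported exception is recorded; the real
        # Coordinator re-raises it from join().
        if ex is not None and self._exc is None:
            self._exc = ex
        self._stop_event.set()

    def should_stop(self):
        return self._stop_event.is_set()

coord = MiniCoordinator()
print(coord.should_stop())   # False
coord.request_stop(ValueError("preempted"))
print(coord.should_stop())   # True
```

Worker threads poll `should_stop()` in their loop and exit cleanly once any thread (or the supervisor itself) has called `request_stop()`.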
- - -- - - - -#### `tf.train.Supervisor.stop_on_exception()` {#Supervisor.stop_on_exception} - -Context handler to stop the supervisor when an exception is raised. - -See `Coordinator.stop_on_exception()`. - -##### Returns: - - A context handler. - - -- - - - -#### `tf.train.Supervisor.wait_for_stop()` {#Supervisor.wait_for_stop} - -Block waiting for the coordinator to stop. - - - -#### Other Methods -- - - - -#### `tf.train.Supervisor.Loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.Loop} - -Start a LooperThread that calls a function periodically. - -If `timer_interval_secs` is None, the thread calls `target(*args, **kwargs)` -repeatedly. Otherwise it calls it every `timer_interval_secs` -seconds. The thread terminates when a stop is requested. - -The started thread is added to the list of threads managed by the supervisor, -so it does not need to be passed to the `stop()` method. - -##### Args: - - -* `timer_interval_secs`: Number. Time boundaries at which to call `target`. -* `target`: A callable object. -* `args`: Optional arguments to pass to `target` when calling it. -* `kwargs`: Optional keyword arguments to pass to `target` when calling it. - -##### Returns: - - The started thread. - - -- - - - -#### `tf.train.Supervisor.PrepareSession(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.PrepareSession} - -Make sure the model is ready to be used. - -Create a session on 'master', recovering or initializing the model as -needed, or wait for a session to be ready. If running as the chief -and `start_standard_services` is set to True, also call the session -manager to start the standard services. - -##### Args: - - -* `master`: name of the TensorFlow master to use. See the `tf.Session` - constructor for how this is interpreted. -* `config`: Optional ConfigProto proto used to configure the session, - which is passed as-is to create the session.
-* `wait_for_checkpoint`: Whether we should wait for the availability of a - checkpoint before creating a session. Defaults to False. -* `max_wait_secs`: Maximum time to wait for the session to become available. -* `start_standard_services`: Whether to start the standard services and the - queue runners. - -##### Returns: - - A Session object that can be used to drive the model. - - -- - - - -#### `tf.train.Supervisor.RequestStop(ex=None)` {#Supervisor.RequestStop} - -Request that the coordinator stop the threads. - -See `Coordinator.request_stop()`. - -##### Args: - - -* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by - `sys.exc_info()`. If this is the first call to `request_stop()`, the - corresponding exception is recorded and re-raised from `join()`. - - -- - - - -#### `tf.train.Supervisor.ShouldStop()` {#Supervisor.ShouldStop} - -Check if the coordinator was told to stop. - -See `Coordinator.should_stop()`. - -##### Returns: - - True if the coordinator was told to stop, False otherwise. - - -- - - - -#### `tf.train.Supervisor.StartQueueRunners(sess, queue_runners=None)` {#Supervisor.StartQueueRunners} - -Start threads for `QueueRunners`. - -Note that the queue runners collected in the graph key `QUEUE_RUNNERS` -are already started automatically when you create a session with the -supervisor, so unless you have non-collected queue runners to start -you do not need to call this explicitly. - -##### Args: - - -* `sess`: A `Session`. -* `queue_runners`: A list of `QueueRunners`. If not specified, we'll use the - list of queue runners gathered in the graph under the key - `GraphKeys.QUEUE_RUNNERS`. - -##### Returns: - - The list of threads started for the `QueueRunners`. - - -- - - - -#### `tf.train.Supervisor.StartStandardServices(sess)` {#Supervisor.StartStandardServices} - -Start the standard services for 'sess'. - -This starts services in the background.
The services started depend -on the parameters to the constructor and may include: - - - A Summary thread computing summaries every save_summaries_secs. - - A Checkpoint thread saving the model every save_model_secs. - - A StepCounter thread that measures step time. - -##### Args: - - -* `sess`: A Session. - -##### Returns: - - A list of threads that are running the standard services. You can use - the Supervisor's Coordinator to join these threads with: - sv.coord.join() - -##### Raises: - - -* `RuntimeError`: If called with a non-chief Supervisor. -* `ValueError`: If no `logdir` was passed to the constructor, as the - services need a log directory. - - -- - - - -#### `tf.train.Supervisor.Stop(threads=None, close_summary_writer=True)` {#Supervisor.Stop} - -Stop the services and the coordinator. - -This does not close the session. - -##### Args: - - -* `threads`: Optional list of threads to join with the coordinator. If - `None`, defaults to the threads running the standard services, the - threads started for `QueueRunners`, and the threads started by the - `loop()` method. To wait on additional threads, pass the - list in this parameter. -* `close_summary_writer`: Whether to close the `summary_writer`. Defaults to - `True` if the summary writer was created by the supervisor, `False` - otherwise. - - -- - - - -#### `tf.train.Supervisor.StopOnException()` {#Supervisor.StopOnException} - -Context handler to stop the supervisor when an exception is raised. - -See `Coordinator.stop_on_exception()`. - -##### Returns: - - A context handler. - - -- - - - -#### `tf.train.Supervisor.SummaryComputed(sess, summary, global_step=None)` {#Supervisor.SummaryComputed} - -Indicate that a summary was computed. - -##### Args: - - -* `sess`: A `Session` object. -* `summary`: A Summary proto, or a string holding a serialized summary proto. -* `global_step`: Int. The global step this summary is associated with. If - `None`, it will try to fetch the current step.
- -##### Raises: - - -* `TypeError`: if 'summary' is not a Summary proto or a string. -* `RuntimeError`: if the Supervisor was created without a `logdir`. - - -- - - - -#### `tf.train.Supervisor.WaitForStop()` {#Supervisor.WaitForStop} - -Block waiting for the coordinator to stop. - - -- - - - -#### `tf.train.Supervisor.coord` {#Supervisor.coord} - -Return the Coordinator used by the Supervisor. - -The Coordinator can be useful if you want to run multiple threads -during your training. - -##### Returns: - - A Coordinator object. - - -- - - - -#### `tf.train.Supervisor.global_step` {#Supervisor.global_step} - -Return the global_step Tensor used by the supervisor. - -##### Returns: - - An integer Tensor for the global_step. - - -- - - - -#### `tf.train.Supervisor.init_feed_dict` {#Supervisor.init_feed_dict} - -Return the feed dictionary used when evaluating the `init_op`. - -##### Returns: - - A feed dictionary or `None`. - - -- - - - -#### `tf.train.Supervisor.init_op` {#Supervisor.init_op} - -Return the Init Op used by the supervisor. - -##### Returns: - - An Op or `None`. - - -- - - - -#### `tf.train.Supervisor.is_chief` {#Supervisor.is_chief} - -Return True if this is a chief supervisor. - -##### Returns: - - A bool. - - -- - - - -#### `tf.train.Supervisor.loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.loop} - -Start a LooperThread that calls a function periodically. - -If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)` -repeatedly. Otherwise it calls it every `timer_interval_secs` -seconds. The thread terminates when a stop is requested. - -The started thread is added to the list of threads managed by the supervisor -so it does not need to be passed to the `stop()` method. - -##### Args: - - -* `timer_interval_secs`: Number. Time boundaries at which to call `target`. -* `target`: A callable object. -* `args`: Optional arguments to pass to `target` when calling it. 
-* `kwargs`: Optional keyword arguments to pass to `target` when calling it. - -##### Returns: - - The started thread. - - -- - - - -#### `tf.train.Supervisor.ready_op` {#Supervisor.ready_op} - -Return the Ready Op used by the supervisor. - -##### Returns: - - An Op or `None`. - - -- - - - -#### `tf.train.Supervisor.save_model_secs` {#Supervisor.save_model_secs} - -Return the delay between checkpoints. - -##### Returns: - - A timestamp. - - -- - - - -#### `tf.train.Supervisor.save_path` {#Supervisor.save_path} - -Return the save path used by the supervisor. - -##### Returns: - - A string. - - -- - - - -#### `tf.train.Supervisor.save_summaries_secs` {#Supervisor.save_summaries_secs} - -Return the delay between summary computations. - -##### Returns: - - A timestamp. - - -- - - - -#### `tf.train.Supervisor.saver` {#Supervisor.saver} - -Return the Saver used by the supervisor. - -##### Returns: - - A Saver object. - - -- - - - -#### `tf.train.Supervisor.session_manager` {#Supervisor.session_manager} - -Return the SessionManager used by the Supervisor. - -##### Returns: - - A SessionManager object. - - -- - - - -#### `tf.train.Supervisor.summary_op` {#Supervisor.summary_op} - -Return the Summary Tensor used by the chief supervisor. - -##### Returns: - - A string Tensor for the summary or `None`. - - -- - - - -#### `tf.train.Supervisor.summary_writer` {#Supervisor.summary_writer} - -Return the SummaryWriter used by the chief supervisor. - -##### Returns: - - A SummaryWriter. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.generate_checkpoint_state_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.generate_checkpoint_state_proto.md deleted file mode 100644 index 7405b289e3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.generate_checkpoint_state_proto.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.train.generate_checkpoint_state_proto(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None)` {#generate_checkpoint_state_proto} - -Generates a checkpoint state proto. - -##### Args: - - -* `save_dir`: Directory where the model was saved. -* `model_checkpoint_path`: The checkpoint file. -* `all_model_checkpoint_paths`: List of strings. Paths to all not-yet-deleted - checkpoints, sorted from oldest to newest. If this is a non-empty list, - the last element must be equal to model_checkpoint_path. These paths - are also saved in the CheckpointState proto. - -##### Returns: - - CheckpointState proto with model_checkpoint_path and - all_model_checkpoint_paths updated to either absolute paths or - relative paths to the current save_dir. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.input_producer.md new file mode 100644 index 0000000000..41a417aac3 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.input_producer.md @@ -0,0 +1,38 @@ +### `tf.train.input_producer(input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None)` {#input_producer} + +Output the rows of `input_tensor` to a queue for an input pipeline. + +##### Args: + + +* `input_tensor`: A tensor with the rows to produce. Must be at least + one-dimensional. Must either have a fully-defined shape, or + `element_shape` must be defined.
+* `element_shape`: (Optional.) A `TensorShape` representing the shape of a + row of `input_tensor`, if it cannot be inferred. +* `num_epochs`: (Optional.) An integer. If specified, `input_producer` produces + each row of `input_tensor` `num_epochs` times before generating an + `OutOfRange` error. If not specified, `input_producer` can cycle through + the rows of `input_tensor` an unlimited number of times. +* `shuffle`: (Optional.) A boolean. If true, the rows are randomly shuffled + within each epoch. +* `seed`: (Optional.) An integer. The seed to use if `shuffle` is true. +* `capacity`: (Optional.) The capacity of the queue to be used for buffering + the input. +* `shared_name`: (Optional.) If set, this queue will be shared under the given + name across multiple sessions. +* `summary_name`: (Optional.) If set, a scalar summary for the current queue + size will be generated, using this name as part of the tag. +* `name`: (Optional.) A name for the queue. + +##### Returns: + + A queue with the output rows. A `QueueRunner` for the queue is + added to the `QUEUE_RUNNERS` collection of the current + graph. + +##### Raises: + + +* `ValueError`: If the shape of the input cannot be inferred from the arguments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.limit_epochs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.limit_epochs.md deleted file mode 100644 index ba3e710df4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.limit_epochs.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.train.limit_epochs(tensor, num_epochs=None, name=None)` {#limit_epochs} - -Returns tensor `num_epochs` times and then raises an `OutOfRange` error. - -##### Args: - - -* `tensor`: Any `Tensor`. -* `num_epochs`: A positive integer (optional). If specified, limits the number - of times the output tensor may be evaluated. -* `name`: A name for the operations (optional).
- -##### Returns: - - tensor or `OutOfRange`. - -##### Raises: - - -* `ValueError`: if `num_epochs` is invalid. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.shuffle_batch.md new file mode 100644 index 0000000000..bf2591801b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.shuffle_batch.md @@ -0,0 +1,74 @@ +### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, shared_name=None, name=None)` {#shuffle_batch} + +Creates batches by randomly shuffling tensors. + +This function adds the following to the current `Graph`: + +* A shuffling queue into which tensors from `tensors` are enqueued. +* A `dequeue_many` operation to create batches from the queue. +* A `QueueRunner` to the `QUEUE_RUNNERS` collection, to enqueue the tensors + from `tensors`. + +If `enqueue_many` is `False`, `tensors` is assumed to represent a +single example. An input tensor with shape `[x, y, z]` will be output +as a tensor with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors` is assumed to represent a +batch of examples, where the first dimension is indexed by example, +and all members of `tensors` should have the same size in the +first dimension. If an input tensor has shape `[*, x, y, z]`, the +output will have shape `[batch_size, x, y, z]`. + +The `capacity` argument controls how long the prefetching is allowed to +grow the queues. + +The returned operation is a dequeue operation and will throw +`tf.errors.OutOfRangeError` if the input queue is exhausted. If this +operation is feeding another input queue, its queue runner will catch +this exception; however, if this operation is used in your main thread, +you are responsible for catching this yourself.
+ +For example: + +```python +# Creates batches of 32 images and 32 labels. +image_batch, label_batch = tf.train.shuffle_batch( + [single_image, single_label], + batch_size=32, + num_threads=4, + capacity=50000, + min_after_dequeue=10000) +``` + +*N.B.:* You must ensure that either (i) the `shapes` argument is +passed, or (ii) all of the tensors in `tensors` have +fully-defined shapes. `ValueError` will be raised if neither of +these conditions holds. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `batch_size`: The new batch size pulled from the queue. +* `capacity`: An integer. The maximum number of elements in the queue. +* `min_after_dequeue`: Minimum number of elements in the queue after a + dequeue, used to ensure a level of mixing of elements. +* `num_threads`: The number of threads enqueuing `tensors`. +* `seed`: Seed for the random shuffling within the queue. +* `enqueue_many`: Whether each tensor in `tensors` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensors`. +* `shared_name`: (Optional) If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + +##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.summary_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.summary_iterator.md new file mode 100644 index 0000000000..5702571441 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.summary_iterator.md @@ -0,0 +1,42 @@ +### `tf.train.summary_iterator(path)` {#summary_iterator} + +An iterator for reading `Event` protocol buffers from an event file.
+ +You can use this function to read events written to an event file. It returns +a Python iterator that yields `Event` protocol buffers. + +Example: Print the contents of an events file. + +```python +for e in tf.train.summary_iterator(path to events file): + print(e) +``` + +Example: Print selected summary values. + +```python +# This example supposes that the events file contains summaries with a +# summary value tag 'loss'. These could have been added by calling +# `add_summary()`, passing the output of a scalar summary op created +# with: `tf.scalar_summary(['loss'], loss_tensor)`. +for e in tf.train.summary_iterator(path to events file): + for v in e.summary.value: + if v.tag == 'loss': + print(v.simple_value) +``` + +See the protocol buffer definitions of +[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto) +and +[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +for more information about their attributes. + +##### Args: + + +* `path`: The path to an event file created by a `SummaryWriter`. + +##### Yields: + + `Event` protocol buffers. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trainable_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trainable_variables.md new file mode 100644 index 0000000000..894d64a2b4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.trainable_variables.md @@ -0,0 +1,13 @@ +### `tf.trainable_variables()` {#trainable_variables} + +Returns all variables created with `trainable=True`. + +When passed `trainable=True`, the `Variable()` constructor automatically +adds new variables to the graph collection +`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the +contents of that collection. + +##### Returns: + + A list of Variable objects.
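The collection bookkeeping behind `tf.trainable_variables()` can be sketched in plain Python. `GraphKeys`, `Variable`, and `trainable_variables` here are simplified stand-ins for the real TensorFlow objects, just to illustrate the mechanism of a constructor registering itself into a named collection:

```python
# Simplified stand-ins for GraphKeys / Variable / trainable_variables;
# illustrative only, not the real TensorFlow classes.
class GraphKeys:
    TRAINABLE_VARIABLES = "trainable_variables"

_collections = {}  # toy graph-collection registry

class Variable:
    def __init__(self, name, trainable=True):
        self.name = name
        if trainable:
            # The constructor adds itself to the collection, mirroring
            # the behavior documented above.
            _collections.setdefault(GraphKeys.TRAINABLE_VARIABLES, []).append(self)

def trainable_variables():
    return list(_collections.get(GraphKeys.TRAINABLE_VARIABLES, []))

w = Variable("w")                                 # collected
b = Variable("b")                                 # collected
step = Variable("global_step", trainable=False)   # not collected
print([v.name for v in trainable_variables()])    # ['w', 'b']
```

This is why a non-trainable variable such as a global step never shows up in the list an optimizer would train on.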
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.transpose.md deleted file mode 100644 index c6b76c7824..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.transpose.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.transpose(a, perm=None, name='transpose')` {#transpose} - -Transposes `a`. Permutes the dimensions according to `perm`. - -The returned tensor's dimension i will correspond to the input dimension -`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is -the rank of the input tensor. Hence by default, this operation performs a -regular matrix transpose on 2-D input Tensors. - -For example: - -```python -# 'x' is [[1 2 3] -# [4 5 6]] -tf.transpose(x) ==> [[1 4] - [2 5] - [3 6]] - -# Equivalently -tf.transpose(x, perm=[1, 0]) ==> [[1 4] - [2 5] - [3 6]] - -# 'perm' is more useful for n-dimensional tensors, for n > 2 -# 'x' is [[[1 2 3] -# [4 5 6]] -# [[7 8 9] -# [10 11 12]]] -# Take the transpose of the matrices in dimension-0 -tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4] - [2 5] - [3 6]] - - [[7 10] - [8 11] - [9 12]]] -``` - -##### Args: - - -* `a`: A `Tensor`. -* `perm`: A permutation of the dimensions of `a`. -* `name`: A name for the operation (optional). - -##### Returns: - - A transposed `Tensor`. 
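The permutation rule documented above (output dimension `i` corresponds to input dimension `perm[i]`) can be checked for the default 2-D case with a small pure-Python sketch; this mirrors the `tf.transpose` example values, not the TensorFlow implementation:

```python
# Pure-Python illustration of the default 2-D transpose semantics:
# with perm omitted, perm = (n-1, ..., 0), i.e. [1, 0] for a matrix.
def transpose2d(x):
    return [list(row) for row in zip(*x)]

x = [[1, 2, 3],
     [4, 5, 6]]
print(transpose2d(x))  # [[1, 4], [2, 5], [3, 6]]
```

The output matches the `tf.transpose(x)` result shown in the documented example.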
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_axis_size_partitioner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_axis_size_partitioner.md deleted file mode 100644 index 5d8822e83c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_axis_size_partitioner.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None)` {#variable_axis_size_partitioner} - -Get a partitioner for VariableScope to keep shards below `max_shard_bytes`. - -This partitioner will shard a Variable along one axis, attempting to keep -the maximum shard size below `max_shard_bytes`. In practice, this is not -always possible when sharding along only one axis. When this happens, -this axis is sharded as much as possible (i.e., every dimension becomes -a separate shard). - -If the partitioner hits the `max_shards` limit, then each shard may end up -larger than `max_shard_bytes`. By default `max_shards` equals `None` and no -limit on the number of shards is enforced. - -One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost -`64MB`, to keep below the protobuf byte limit. - -##### Args: - - -* `max_shard_bytes`: The maximum size any given shard is allowed to be. -* `axis`: The axis to partition along. Default: outermost axis. -* `bytes_per_string_element`: If the `Variable` is of type string, this provides - an estimate of how large each scalar in the `Variable` is. -* `max_shards`: The maximum number of shards (an int); takes precedence - over `max_shard_bytes`. - -##### Returns: - - A partition function usable as the `partitioner` argument to - `variable_scope`, `get_variable`, and `get_partitioned_variable_list`. - -##### Raises: - - -* `ValueError`: If any of the byte counts are non-positive.
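To make the sizing arithmetic concrete, here is a rough pure-Python sketch of how a shard count could be derived from `max_shard_bytes`. The function `estimate_shards` and its formula are an illustration of the idea only, not the actual partitioner implementation, and the `max_shards` cap is modeled after the documented behavior that it takes precedence over `max_shard_bytes`:

```python
import math

# Rough, illustrative shard-count arithmetic; NOT the real
# tf.variable_axis_size_partitioner implementation.
def estimate_shards(shape, dtype_bytes, max_shard_bytes, axis=0, max_shards=None):
    total_bytes = dtype_bytes
    for dim in shape:
        total_bytes *= dim
    bytes_per_slice = total_bytes // shape[axis]      # one slice along `axis`
    slices_per_shard = max(1, max_shard_bytes // bytes_per_slice)
    shards = math.ceil(shape[axis] / slices_per_shard)
    shards = min(shards, shape[axis])                 # a slice is the smallest shard
    if max_shards is not None:
        shards = min(shards, max_shards)              # max_shards takes precedence
    return shards

# A 1,000,000 x 64 float32 variable is 256 MB; with shards capped near
# 64 MB it splits into 4 shards along axis 0.
print(estimate_shards([1000000, 64], 4, 64 << 20))  # 4
```

Note that when `max_shards` is hit, each shard exceeds the byte budget, which is exactly the trade-off the documentation describes.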
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Dimension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Dimension.md deleted file mode 100644 index f149b6cb65..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Dimension.md +++ /dev/null @@ -1,83 +0,0 @@ -Represents the value of one dimension in a TensorShape. -- - - - -#### `tf.Dimension.__init__(value)` {#Dimension.__init__} - -Creates a new Dimension with the given value. - - -- - - - -#### `tf.Dimension.assert_is_compatible_with(other)` {#Dimension.assert_is_compatible_with} - -Raises an exception if `other` is not compatible with this Dimension. - -##### Args: - - -* `other`: Another Dimension. - -##### Raises: - - -* `ValueError`: If `self` and `other` are not compatible (see - is_compatible_with). - - -- - - - -#### `tf.Dimension.is_compatible_with(other)` {#Dimension.is_compatible_with} - -Returns true if `other` is compatible with this Dimension. - -Two known Dimensions are compatible if they have the same value. -An unknown Dimension is compatible with all other Dimensions. - -##### Args: - - -* `other`: Another Dimension. - -##### Returns: - - True if this Dimension and `other` are compatible. - - -- - - - -#### `tf.Dimension.merge_with(other)` {#Dimension.merge_with} - -Returns a Dimension that combines the information in `self` and `other`. - -Dimensions are combined as follows: - - Dimension(n) .merge_with(Dimension(n)) == Dimension(n) - Dimension(n) .merge_with(Dimension(None)) == Dimension(n) - Dimension(None).merge_with(Dimension(n)) == Dimension(n) - Dimension(None).merge_with(Dimension(None)) == Dimension(None) - Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m - -##### Args: - - -* `other`: Another Dimension. - -##### Returns: - - A Dimension containing the combined information of `self` and - `other`. 
- -##### Raises: - - -* `ValueError`: If `self` and `other` are not compatible (see - is_compatible_with). - - -- - - - -#### `tf.Dimension.value` {#Dimension.value} - -The value of this dimension, or None if it is unknown. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.md deleted file mode 100644 index 607b81a9bf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.md +++ /dev/null @@ -1,31 +0,0 @@ -Configuration for a dense input feature in a sequence item. - -To treat a sparse input as dense, provide `allow_missing=True`; otherwise, -the parse functions will fail on any examples missing this feature. - -Fields: - shape: Shape of input data. - dtype: Data type of input. - allow_missing: Whether to allow this feature to be missing from a feature - list item. -- - - - -#### `tf.FixedLenSequenceFeature.allow_missing` {#FixedLenSequenceFeature.allow_missing} - -Alias for field number 2 - - -- - - - -#### `tf.FixedLenSequenceFeature.dtype` {#FixedLenSequenceFeature.dtype} - -Alias for field number 1 - - -- - - - -#### `tf.FixedLenSequenceFeature.shape` {#FixedLenSequenceFeature.shape} - -Alias for field number 0 - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Graph.md deleted file mode 100644 index 762a117664..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Graph.md +++ /dev/null @@ -1,783 +0,0 @@ -A TensorFlow computation, represented as a dataflow graph. - -A `Graph` contains a set of -[`Operation`](../../api_docs/python/framework.md#Operation) objects, -which represent units of computation; and -[`Tensor`](../../api_docs/python/framework.md#Tensor) objects, which represent -the units of data that flow between operations. 
- -A default `Graph` is always registered, and accessible by calling -[`tf.get_default_graph()`](../../api_docs/python/framework.md#get_default_graph). -To add an operation to the default graph, simply call one of the functions -that defines a new `Operation`: - -``` -c = tf.constant(4.0) -assert c.graph is tf.get_default_graph() -``` - -Another typical usage involves the -[`Graph.as_default()`](../../api_docs/python/framework.md#Graph.as_default) -context manager, which overrides the current default graph for the -lifetime of the context: - -```python -g = tf.Graph() -with g.as_default(): - # Define operations and tensors in `g`. - c = tf.constant(30.0) - assert c.graph is g -``` - -Important note: This class *is not* thread-safe for graph construction. All -operations should be created from a single thread, or external -synchronization must be provided. Unless otherwise specified, all methods -are not thread-safe. - -- - - - -#### `tf.Graph.__init__()` {#Graph.__init__} - -Creates a new, empty Graph. - - -- - - - -#### `tf.Graph.as_default()` {#Graph.as_default} - -Returns a context manager that makes this `Graph` the default graph. - -This method should be used if you want to create multiple graphs -in the same process. For convenience, a global default graph is -provided, and all ops will be added to this graph if you do not -create a new graph explicitly. Use this method with the `with` keyword -to specify that ops created within the scope of a block should be -added to this graph. - -The default graph is a property of the current thread. If you -create a new thread, and wish to use the default graph in that -thread, you must explicitly add a `with g.as_default():` in that -thread's function. - -The following code examples are equivalent: - -```python -# 1. Using Graph.as_default(): -g = tf.Graph() -with g.as_default(): - c = tf.constant(5.0) - assert c.graph is g - -# 2. 
Constructing and making default: -with tf.Graph().as_default() as g: - c = tf.constant(5.0) - assert c.graph is g -``` - -##### Returns: - - A context manager for using this graph as the default graph. - - -- - - - -#### `tf.Graph.as_graph_def(from_version=None, add_shapes=False)` {#Graph.as_graph_def} - -Returns a serialized `GraphDef` representation of this graph. - -The serialized `GraphDef` can be imported into another `Graph` -(using [`import_graph_def()`](#import_graph_def)) or used with the -[C++ Session API](../../api_docs/cc/index.md). - -This method is thread-safe. - -##### Args: - - -* `from_version`: Optional. If this is set, returns a `GraphDef` - containing only the nodes that were added to this graph since - its `version` property had the given value. -* `add_shapes`: If true, adds an "_output_shapes" list attr to each - node with the inferred shapes of each of its outputs. - -##### Returns: - - A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) - protocol buffer. - -##### Raises: - - -* `ValueError`: If the `graph_def` would be too large. - - -- - - - -#### `tf.Graph.finalize()` {#Graph.finalize} - -Finalizes this graph, making it read-only. - -After calling `g.finalize()`, no new operations can be added to -`g`. This method is used to ensure that no operations are added -to a graph when it is shared between multiple threads, for example -when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner). - - -- - - - -#### `tf.Graph.finalized` {#Graph.finalized} - -True if this graph has been finalized. - - - -- - - - -#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies} - -Returns a context manager that specifies control dependencies. - -Use with the `with` keyword to specify that all operations constructed -within the context should have control dependencies on -`control_inputs`. 
For example: - -```python -with g.control_dependencies([a, b, c]): - # `d` and `e` will only run after `a`, `b`, and `c` have executed. - d = ... - e = ... -``` - -Multiple calls to `control_dependencies()` can be nested, and in -that case a new `Operation` will have control dependencies on the union -of `control_inputs` from all active contexts. - -```python -with g.control_dependencies([a, b]): - # Ops constructed here run after `a` and `b`. - with g.control_dependencies([c, d]): - # Ops constructed here run after `a`, `b`, `c`, and `d`. -``` - -You can pass None to clear the control dependencies: - -```python -with g.control_dependencies([a, b]): - # Ops constructed here run after `a` and `b`. - with g.control_dependencies(None): - # Ops constructed here run normally, not waiting for either `a` or `b`. - with g.control_dependencies([c, d]): - # Ops constructed here run after `c` and `d`, also not waiting - # for either `a` or `b`. -``` - -*N.B.* The control dependencies context applies *only* to ops that -are constructed within the context. Merely using an op or tensor -in the context does not add a control dependency. The following -example illustrates this point: - -```python -# WRONG -def my_func(pred, tensor): - t = tf.matmul(tensor, tensor) - with tf.control_dependencies([pred]): - # The matmul op is created outside the context, so no control - # dependency will be added. - return t - -# RIGHT -def my_func(pred, tensor): - with tf.control_dependencies([pred]): - # The matmul op is created in the context, so a control dependency - # will be added. - return tf.matmul(tensor, tensor) -``` - -##### Args: - - -* `control_inputs`: A list of `Operation` or `Tensor` objects which - must be executed or computed before running the operations - defined in the context. Can also be `None` to clear the control - dependencies. - -##### Returns: - - A context manager that specifies control dependencies for all - operations constructed within the context. 
- -##### Raises: - - -* `TypeError`: If `control_inputs` is not a list of `Operation` or - `Tensor` objects. - - -- - - - -#### `tf.Graph.device(device_name_or_function)` {#Graph.device} - -Returns a context manager that specifies the default device to use. - -The `device_name_or_function` argument may either be a device name -string, a device function, or None: - -* If it is a device name string, all operations constructed in - this context will be assigned to the device with that name, unless - overridden by a nested `device()` context. -* If it is a function, it will be treated as a function from - Operation objects to device name strings, and invoked each time - a new Operation is created. The Operation will be assigned to - the device with the returned name. -* If it is None, all `device()` invocations from the enclosing context - will be ignored. - -For information about the valid syntax of device name strings, see -the documentation in -[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h). - -For example: - -```python -with g.device('/gpu:0'): - # All operations constructed in this context will be placed - # on GPU 0. - with g.device(None): - # All operations constructed in this context will have no - # assigned device. - -# Defines a function from `Operation` to device string. -def matmul_on_gpu(n): - if n.type == "MatMul": - return "/gpu:0" - else: - return "/cpu:0" - -with g.device(matmul_on_gpu): - # All operations of type "MatMul" constructed in this context - # will be placed on GPU 0; all other operations will be placed - # on CPU 0. -``` - -**N.B.** The device scope may be overridden by op wrappers or -other library code. For example, a variable assignment op -`v.assign()` must be colocated with the `tf.Variable` `v`, and -incompatible device scopes will be ignored. - -##### Args: - - -* `device_name_or_function`: The device name or function to use in - the context. 
- -##### Returns: - - A context manager that specifies the default device to use for newly - created ops. - - -- - - - -#### `tf.Graph.name_scope(name)` {#Graph.name_scope} - -Returns a context manager that creates hierarchical names for operations. - -A graph maintains a stack of name scopes. A `with name_scope(...):` -statement pushes a new name onto the stack for the lifetime of the context. - -The `name` argument will be interpreted as follows: - -* A string (not ending with '/') will create a new name scope, in which - `name` is appended to the prefix of all operations created in the - context. If `name` has been used before, it will be made unique by - calling `self.unique_name(name)`. -* A scope previously captured from a `with g.name_scope(...) as - scope:` statement will be treated as an "absolute" name scope, which - makes it possible to re-enter existing scopes. -* A value of `None` or the empty string will reset the current name scope - to the top-level (empty) name scope. - -For example: - -```python -with tf.Graph().as_default() as g: - c = tf.constant(5.0, name="c") - assert c.op.name == "c" - c_1 = tf.constant(6.0, name="c") - assert c_1.op.name == "c_1" - - # Creates a scope called "nested" - with g.name_scope("nested") as scope: - nested_c = tf.constant(10.0, name="c") - assert nested_c.op.name == "nested/c" - - # Creates a nested scope called "inner". - with g.name_scope("inner"): - nested_inner_c = tf.constant(20.0, name="c") - assert nested_inner_c.op.name == "nested/inner/c" - - # Create a nested scope called "inner_1". - with g.name_scope("inner"): - nested_inner_1_c = tf.constant(30.0, name="c") - assert nested_inner_1_c.op.name == "nested/inner_1/c" - - # Treats `scope` as an absolute name scope, and - # switches to the "nested/" scope. 
- with g.name_scope(scope): - nested_d = tf.constant(40.0, name="d") - assert nested_d.op.name == "nested/d" - - with g.name_scope(""): - e = tf.constant(50.0, name="e") - assert e.op.name == "e" -``` - -The name of the scope itself can be captured by `with -g.name_scope(...) as scope:`, which stores the name of the scope -in the variable `scope`. This value can be used to name an -operation that represents the overall result of executing the ops -in a scope. For example: - -```python -inputs = tf.constant(...) -with g.name_scope('my_layer') as scope: - weights = tf.Variable(..., name="weights") - biases = tf.Variable(..., name="biases") - affine = tf.matmul(inputs, weights) + biases - output = tf.nn.relu(affine, name=scope) -``` - -##### Args: - - -* `name`: A name for the scope. - -##### Returns: - - A context manager that installs `name` as a new name scope. - - - -A `Graph` instance supports an arbitrary number of "collections" -that are identified by name. For convenience when building a large -graph, collections can store groups of related objects: for -example, the `tf.Variable` uses a collection (named -[`tf.GraphKeys.VARIABLES`](../../api_docs/python/framework.md#GraphKeys)) for -all variables that are created during the construction of a graph. The caller -may define additional collections by specifying a new name. - -- - - - -#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection} - -Stores `value` in the collection with the given `name`. - -Note that collections are not sets, so it is possible to add a value to -a collection several times. - -##### Args: - - -* `name`: The key for the collection. The `GraphKeys` class - contains many standard names for collections. -* `value`: The value to add to the collection. - - -- - - - -#### `tf.Graph.add_to_collections(names, value)` {#Graph.add_to_collections} - -Stores `value` in the collections given by `names`. 
- -Note that collections are not sets, so it is possible to add a value to -a collection several times. This function makes sure that duplicates in -`names` are ignored, but it will not check for pre-existing membership of -`value` in any of the collections in `names`. - -`names` can be any iterable, but if `names` is a string, it is treated as a -single collection name. - -##### Args: - - -* `names`: The keys for the collections to add to. The `GraphKeys` class - contains many standard names for collections. -* `value`: The value to add to the collections. - - -- - - - -#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection} - -Returns a list of values in the collection with the given `name`. - -This is different from `get_collection_ref()` which always returns the -actual collection list if it exists in that it returns a new list each time -it is called. - -##### Args: - - -* `name`: The key for the collection. For example, the `GraphKeys` class - contains many standard names for collections. -* `scope`: (Optional.) If supplied, the resulting list is filtered to include - only items whose `name` attribute matches using `re.match`. Items - without a `name` attribute are never returned if a scope is supplied and - the choice of `re.match` means that a `scope` without special tokens - filters by prefix. - -##### Returns: - - The list of values in the collection with the given `name`, or - an empty list if no value has been added to that collection. The - list contains the values in the order under which they were - collected. - - -- - - - -#### `tf.Graph.get_collection_ref(name)` {#Graph.get_collection_ref} - -Returns a list of values in the collection with the given `name`. - -If the collection exists, this returns the list itself, which can -be modified in place to change the collection. If the collection does -not exist, it is created as an empty list and the list is returned.
- -This is different from `get_collection()` which always returns a copy of -the collection list if it exists and never creates an empty collection. - -##### Args: - - -* `name`: The key for the collection. For example, the `GraphKeys` class - contains many standard names for collections. - -##### Returns: - - The list of values in the collection with the given `name`, or an empty - list if no value has been added to that collection. - - - -- - - - -#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element} - -Returns the object referred to by `obj`, as an `Operation` or `Tensor`. - -This function validates that `obj` represents an element of this -graph, and gives an informative error message if it is not. - -This function is the canonical way to get/validate an object of -one of the allowed types from an external argument reference in the -Session API. - -This method may be called concurrently from multiple threads. - -##### Args: - - -* `obj`: A `Tensor`, an `Operation`, or the name of a tensor or operation. - Can also be any object with an `_as_graph_element()` method that returns - a value of one of these types. -* `allow_tensor`: If true, `obj` may refer to a `Tensor`. -* `allow_operation`: If true, `obj` may refer to an `Operation`. - -##### Returns: - - The `Tensor` or `Operation` in the Graph corresponding to `obj`. - -##### Raises: - - -* `TypeError`: If `obj` is not a type we support attempting to convert - to types. -* `ValueError`: If `obj` is of an appropriate type but invalid. For - example, an invalid string. -* `KeyError`: If `obj` is not an object in the graph. - - -- - - - -#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name} - -Returns the `Operation` with the given `name`. - -This method may be called concurrently from multiple threads. - -##### Args: - - -* `name`: The name of the `Operation` to return. - -##### Returns: - - The `Operation` with the given `name`. 
- -##### Raises: - - -* `TypeError`: If `name` is not a string. -* `KeyError`: If `name` does not correspond to an operation in this graph. - - -- - - - -#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name} - -Returns the `Tensor` with the given `name`. - -This method may be called concurrently from multiple threads. - -##### Args: - - -* `name`: The name of the `Tensor` to return. - -##### Returns: - - The `Tensor` with the given `name`. - -##### Raises: - - -* `TypeError`: If `name` is not a string. -* `KeyError`: If `name` does not correspond to a tensor in this graph. - - -- - - - -#### `tf.Graph.get_operations()` {#Graph.get_operations} - -Return the list of operations in the graph. - -You can modify the operations in place, but modifications -to the list such as inserts/delete have no effect on the -list of operations known to the graph. - -This method may be called concurrently from multiple threads. - -##### Returns: - - A list of Operations. - - - -- - - - -#### `tf.Graph.seed` {#Graph.seed} - -The graph-level random seed of this graph. - - -- - - - -#### `tf.Graph.unique_name(name, mark_as_used=True)` {#Graph.unique_name} - -Return a unique operation name for `name`. - -Note: You rarely need to call `unique_name()` directly. Most of -the time you just need to create `with g.name_scope()` blocks to -generate structured names. - -`unique_name` is used to generate structured names, separated by -`"/"`, to help identify operations when debugging a graph. -Operation names are displayed in error messages reported by the -TensorFlow runtime, and in various visualization tools such as -TensorBoard. - -If `mark_as_used` is set to `True`, which is the default, a new -unique name is created and marked as in use. If it's set to `False`, -the unique name is returned without actually being marked as used. -This is useful when the caller simply wants to know what the name -to be created will be. - -##### Args: - - -* `name`: The name for an operation. 
-* `mark_as_used`: Whether to mark this name as being used. - -##### Returns: - - A string to be passed to `create_op()` that will be used - to name the operation being created. - - -- - - - -#### `tf.Graph.version` {#Graph.version} - -Returns a version number that increases as ops are added to the graph. - -Note that this is unrelated to the -[GraphDef version](#Graph.graph_def_version). - - -- - - - -#### `tf.Graph.graph_def_versions` {#Graph.graph_def_versions} - -The GraphDef version information of this graph. - -For details on the meaning of each version, see [`GraphDef`] -(https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto). - -##### Returns: - - A `VersionDef`. - - - -- - - - -#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)` {#Graph.create_op} - -Creates an `Operation` in this graph. - -This is a low-level interface for creating an `Operation`. Most -programs will not call this method directly, and instead use the -Python op constructors, such as `tf.constant()`, which add ops to -the default graph. - -##### Args: - - -* `op_type`: The `Operation` type to create. This corresponds to the - `OpDef.name` field for the proto that defines the operation. -* `inputs`: A list of `Tensor` objects that will be inputs to the `Operation`. -* `dtypes`: A list of `DType` objects that will be the types of the tensors - that the operation produces. -* `input_types`: (Optional.) A list of `DType`s that will be the types of - the tensors that the operation consumes. By default, uses the base - `DType` of each input in `inputs`. Operations that expect - reference-typed inputs must specify `input_types` explicitly. -* `name`: (Optional.) A string name for the operation. If not specified, a - name is generated based on `op_type`. -* `attrs`: (Optional.) 
A dictionary where the key is the attribute name (a - string) and the value is the respective `attr` attribute of the - `NodeDef` proto that will represent the operation (an `AttrValue` - proto). -* `op_def`: (Optional.) The `OpDef` proto that describes the `op_type` that - the operation will have. -* `compute_shapes`: (Optional.) If True, shape inference will be performed - to compute the shapes of the outputs. -* `compute_device`: (Optional.) If True, device functions will be executed - to compute the device property of the Operation. - -##### Raises: - - -* `TypeError`: if any of the inputs is not a `Tensor`. -* `ValueError`: if colocation conflicts with existing device assignment. - -##### Returns: - - An `Operation` object. - - -- - - - -#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map} - -EXPERIMENTAL: A context manager for overriding gradient functions. - -This context manager can be used to override the gradient function -that will be used for ops within the scope of the context. - -For example: - -```python -@tf.RegisterGradient("CustomSquare") -def _custom_square_grad(op, grad): - # ... - -with tf.Graph().as_default() as g: - c = tf.constant(5.0) - s_1 = tf.square(c) # Uses the default gradient for tf.square. - with g.gradient_override_map({"Square": "CustomSquare"}): - s_2 = tf.square(c) # Uses _custom_square_grad to compute the - # gradient of s_2. -``` - -##### Args: - - -* `op_type_map`: A dictionary mapping op type strings to alternative op - type strings. - -##### Returns: - - A context manager that sets the alternative op type to be used for one - or more ops created in that context. - -##### Raises: - - -* `TypeError`: If `op_type_map` is not a dictionary mapping strings to - strings. - - - -#### Other Methods -- - - - -#### `tf.Graph.colocate_with(op, ignore_existing=False)` {#Graph.colocate_with} - -Returns a context manager that specifies an op to colocate with.
- -Note: this function is not for public use, only for internal libraries. - -For example: - -```python -a = tf.Variable([1.0]) -with g.colocate_with(a): - b = tf.constant(1.0) - c = tf.add(a, b) -``` - -`b` and `c` will always be colocated with `a`, no matter where `a` -is eventually placed. - -##### Args: - - -* `op`: The op to colocate all created ops with. -* `ignore_existing`: If true, only applies colocation of this op within - the context, rather than applying all colocation properties - on the stack. - -##### Raises: - - -* `ValueError`: if op is None. - -##### Yields: - - A context manager that specifies the op with which to colocate - newly created ops. - - -- - - - -#### `tf.Graph.get_all_collection_keys()` {#Graph.get_all_collection_keys} - -Returns a list of collections used in this graph. - - -- - - - -#### `tf.Graph.is_feedable(tensor)` {#Graph.is_feedable} - -Returns `True` if and only if `tensor` is feedable. - - -- - - - -#### `tf.Graph.prevent_feeding(tensor)` {#Graph.prevent_feeding} - -Marks the given `tensor` as unfeedable in this graph. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md deleted file mode 100644 index 1d656f4018..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md +++ /dev/null @@ -1,36 +0,0 @@ -Standard names to use for graph collections. - -The standard library uses various well-known names to collect and -retrieve values associated with a graph. For example, the -`tf.Optimizer` subclasses default to optimizing the variables -collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is -specified, but it is also possible to pass an explicit list of -variables. - -The following standard keys are defined: - -* `VARIABLES`: the `Variable` objects that comprise a model, and - must be saved and restored together. 
See - [`tf.all_variables()`](../../api_docs/python/state_ops.md#all_variables) - for more details. -* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will - be trained by an optimizer. See - [`tf.trainable_variables()`](../../api_docs/python/state_ops.md#trainable_variables) - for more details. -* `SUMMARIES`: the summary `Tensor` objects that have been created in the - graph. See - [`tf.merge_all_summaries()`](../../api_docs/python/train.md#merge_all_summaries) - for more details. -* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to - produce input for a computation. See - [`tf.start_queue_runners()`](../../api_docs/python/train.md#start_queue_runners) - for more details. -* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also - keep moving averages. See - [`tf.moving_average_variables()`](../../api_docs/python/state_ops.md#moving_average_variables) - for more details. -* `REGULARIZATION_LOSSES`: regularization losses collected during graph - construction. -* `WEIGHTS`: weights inside neural network layers -* `BIASES`: biases inside neural network layers -* `ACTIVATIONS`: activations of neural network layers diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.OpError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.OpError.md new file mode 100644 index 0000000000..c23014ad17 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.OpError.md @@ -0,0 +1,62 @@ +A generic error that is raised when TensorFlow execution fails. + +Whenever possible, the session will raise a more specific subclass +of `OpError` from the `tf.errors` module. + +- - - + +#### `tf.OpError.op` {#OpError.op} + +The operation that failed, if known. + +*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` +or `Recv` op, there will be no corresponding +[`Operation`](../../api_docs/python/framework.md#Operation) +object. 
In that case, this will return `None`, and you should +instead use the [`OpError.node_def`](#OpError.node_def) to +discover information about the op. + +##### Returns: + + The `Operation` that failed, or None. + + +- - - + +#### `tf.OpError.node_def` {#OpError.node_def} + +The `NodeDef` proto representing the op that failed. + + + +#### Other Methods +- - - + +#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__} + +Creates a new `OpError` indicating that a particular op failed. + +##### Args: + + +* `node_def`: The `graph_pb2.NodeDef` proto representing the op that failed, + if known; otherwise None. +* `op`: The `ops.Operation` that failed, if known; otherwise None. +* `message`: The message string describing the failure. +* `error_code`: The `error_codes_pb2.Code` describing the error. + + +- - - + +#### `tf.OpError.error_code` {#OpError.error_code} + +The integer error code that describes the error. + + +- - - + +#### `tf.OpError.message` {#OpError.message} + +The error message that describes the error. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Print.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Print.md new file mode 100644 index 0000000000..b1ec7c1af0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Print.md @@ -0,0 +1,23 @@ +### `tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)` {#Print} + +Prints a list of tensors. + +This is an identity op with the side effect of printing `data` when +evaluating. + +##### Args: + + +* `input_`: A tensor passed through this op. +* `data`: A list of tensors to print out when op is evaluated. +* `message`: A string, prefix of the printed message. +* `first_n`: Only log `first_n` number of times. Negative numbers log always; + this is the default. +* `summarize`: Only print this many entries of each tensor.
If None, then a + maximum of 3 elements are printed per input tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + Same tensor as `input_`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.SparseTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.SparseTensor.md deleted file mode 100644 index a999b3862f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.SparseTensor.md +++ /dev/null @@ -1,143 +0,0 @@ -Represents a sparse tensor. - -TensorFlow represents a sparse tensor as three separate dense tensors: -`indices`, `values`, and `shape`. In Python, the three tensors are -collected into a `SparseTensor` class for ease of use. If you have separate -`indices`, `values`, and `shape` tensors, wrap them in a `SparseTensor` -object before passing to the ops below. - -Concretely, the sparse tensor `SparseTensor(indices, values, shape)` is - -* `indices`: A 2-D int64 tensor of shape `[N, ndims]`. -* `values`: A 1-D tensor of any type and shape `[N]`. -* `shape`: A 1-D int64 tensor of shape `[ndims]`. - -where `N` and `ndims` are the number of values, and number of dimensions in -the `SparseTensor` respectively. - -The corresponding dense tensor satisfies - -```python -dense.shape = shape -dense[tuple(indices[i])] = values[i] -``` - -By convention, `indices` should be sorted in row-major order (or equivalently -lexicographic order on the tuples `indices[i]`). This is not enforced when -`SparseTensor` objects are constructed, but most ops assume correct ordering. -If the ordering of sparse tensor `st` is wrong, a fixed version can be -obtained by calling `tf.sparse_reorder(st)`.
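The dense relation above can be sketched in plain Python, independent of TensorFlow. The `sparse_to_dense` helper below is purely illustrative (it is not the `tf.sparse_to_dense` op, and it handles only the 2-D case):

```python
# Illustrative 2-D sketch of the SparseTensor convention documented above:
# dense[tuple(indices[i])] = values[i], and zeros everywhere else.

def sparse_to_dense(indices, values, shape):
    """Builds a dense nested list from SparseTensor-style components."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for (i, j), v in zip(indices, values):
        dense[i][j] = v
    return dense

dense = sparse_to_dense(indices=[[0, 0], [1, 2]], values=[1, 2], shape=[3, 4])
# dense == [[1, 0, 0, 0], [0, 0, 2, 0], [0, 0, 0, 0]]
```

The real ops generalize this to tensors of any rank, with `indices` of shape `[N, ndims]`.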
- -Example: The sparse tensor - -```python -SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], shape=[3, 4]) -``` - -represents the dense tensor - -```python -[[1, 0, 0, 0] - [0, 0, 2, 0] - [0, 0, 0, 0]] -``` - -- - - - -#### `tf.SparseTensor.__init__(indices, values, shape)` {#SparseTensor.__init__} - -Creates a `SparseTensor`. - -##### Args: - - -* `indices`: A 2-D int64 tensor of shape `[N, ndims]`. -* `values`: A 1-D tensor of any type and shape `[N]`. -* `shape`: A 1-D int64 tensor of shape `[ndims]`. - -##### Returns: - - A `SparseTensor` - - -- - - - -#### `tf.SparseTensor.indices` {#SparseTensor.indices} - -The indices of non-zero values in the represented dense tensor. - -##### Returns: - - A 2-D Tensor of int64 with shape `[N, ndims]`, where `N` is the - number of non-zero values in the tensor, and `ndims` is the rank. - - -- - - - -#### `tf.SparseTensor.values` {#SparseTensor.values} - -The non-zero values in the represented dense tensor. - -##### Returns: - - A 1-D Tensor of any data type. - - -- - - - -#### `tf.SparseTensor.dtype` {#SparseTensor.dtype} - -The `DType` of elements in this tensor. - - -- - - - -#### `tf.SparseTensor.shape` {#SparseTensor.shape} - -A 1-D Tensor of int64 representing the shape of the dense tensor. - - -- - - - -#### `tf.SparseTensor.graph` {#SparseTensor.graph} - -The `Graph` that contains the index, value, and shape tensors. - - - -#### Other Methods -- - - - -#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval} - -Evaluates this sparse tensor in a `Session`. - -Calling this method will execute all preceding operations that -produce the inputs needed for the operation that produces this -tensor. - -*N.B.* Before invoking `SparseTensor.eval()`, its graph must have been -launched in a session, and either a default session must be -available, or `session` must be specified explicitly. - -##### Args: - - -* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. 
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a - description of the valid feed values. -* `session`: (Optional.) The `Session` to be used to evaluate this sparse - tensor. If none, the default session will be used. - -##### Returns: - - A `SparseTensorValue` object. - - -- - - - -#### `tf.SparseTensor.from_value(cls, sparse_tensor_value)` {#SparseTensor.from_value} - - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.TextLineReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.TextLineReader.md new file mode 100644 index 0000000000..ebb023a2fa --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.TextLineReader.md @@ -0,0 +1,148 @@ +A Reader that outputs the lines of a file delimited by newlines. + +Newlines are stripped from the output. +See ReaderBase for supported methods. +- - - + +#### `tf.TextLineReader.__init__(skip_header_lines=None, name=None)` {#TextLineReader.__init__} + +Create a TextLineReader. + +##### Args: + + +* `skip_header_lines`: An optional int. Defaults to 0. Number of lines + to skip from the beginning of every file. +* `name`: A name for the operation (optional). + + +- - - + +#### `tf.TextLineReader.num_records_produced(name=None)` {#TextLineReader.num_records_produced} + +Returns the number of records this reader has produced. + +This is the same as the number of Read executions that have +succeeded. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. + + +- - - + +#### `tf.TextLineReader.num_work_units_completed(name=None)` {#TextLineReader.num_work_units_completed} + +Returns the number of work units this reader has finished processing. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + An int64 Tensor. 
+ + +- - - + +#### `tf.TextLineReader.read(queue, name=None)` {#TextLineReader.read} + +Returns the next record (key, value pair) produced by a reader. + +Will dequeue a work unit from queue if necessary (e.g. when the +Reader needs to start reading from a new file since it has +finished with the previous file). + +##### Args: + + +* `queue`: A Queue or a mutable string Tensor representing a handle + to a Queue, with string work items. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of Tensors (key, value). + +* `key`: A string scalar Tensor. +* `value`: A string scalar Tensor. + + +- - - + +#### `tf.TextLineReader.reader_ref` {#TextLineReader.reader_ref} + +Op that implements the reader. + + +- - - + +#### `tf.TextLineReader.reset(name=None)` {#TextLineReader.reset} + +Restore a reader to its initial clean state. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.TextLineReader.restore_state(state, name=None)` {#TextLineReader.restore_state} + +Restore a reader to a previously saved state. + +Not all Readers support being restored, so this can produce an +Unimplemented error. + +##### Args: + + +* `state`: A string Tensor. + Result of a SerializeState of a Reader with matching type. +* `name`: A name for the operation (optional). + +##### Returns: + + The created Operation. + + +- - - + +#### `tf.TextLineReader.serialize_state(name=None)` {#TextLineReader.serialize_state} + +Produce a string tensor that encodes the state of a reader. + +Not all Readers support being serialized, so this can produce an +Unimplemented error. + +##### Args: + + +* `name`: A name for the operation (optional). + +##### Returns: + + A string Tensor. + + +- - - + +#### `tf.TextLineReader.supports_serialize` {#TextLineReader.supports_serialize} + +Whether the Reader implementation can serialize its state. 
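To make the record semantics above concrete, here is a minimal pure-Python sketch of what a `TextLineReader` yields per `read()`: one `(key, value)` pair per line, newlines stripped, with the first `skip_header_lines` lines skipped. This is illustrative only, not the TensorFlow implementation, and the `filename:line_number` key format is an assumption of this sketch:

```python
def read_lines(filename, lines, skip_header_lines=0):
    # Mimic TextLineReader semantics in plain Python: yield (key, value)
    # pairs, one per line, with trailing newlines stripped and the first
    # `skip_header_lines` lines of the file skipped.
    for i, line in enumerate(lines):
        if i < skip_header_lines:
            continue
        yield ("%s:%d" % (filename, i + 1), line.rstrip("\n"))
```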
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md deleted file mode 100644 index a7b49bfcd6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md +++ /dev/null @@ -1,11 +0,0 @@ -Configuration for parsing a variable-length input feature. - -Fields: - dtype: Data type of input. -- - - - -#### `tf.VarLenFeature.dtype` {#VarLenFeature.dtype} - -Alias for field number 0 - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.add_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.add_n.md deleted file mode 100644 index c214a46057..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.add_n.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.add_n(inputs, name=None)` {#add_n} - -Add all input tensors element wise. - -##### Args: - - -* `inputs`: A list of at least 1 `Tensor` objects of the same type in: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. - Must all be the same size and shape. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `inputs`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md new file mode 100644 index 0000000000..af0a2270a9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md @@ -0,0 +1,17 @@ +### `tf.argmax(input, dimension, name=None)` {#argmax} + +Returns the index with the largest value across dimensions of a tensor. + +##### Args: + + +* `input`: A `Tensor`. 
Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. +* `dimension`: A `Tensor` of type `int32`. + int32, 0 <= dimension < rank(input). Describes which dimension + of the input Tensor to reduce across. For vectors, use dimension = 0. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int64`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmin.md new file mode 100644 index 0000000000..002d5ed816 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmin.md @@ -0,0 +1,17 @@ +### `tf.argmin(input, dimension, name=None)` {#argmin} + +Returns the index with the smallest value across dimensions of a tensor. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. +* `dimension`: A `Tensor` of type `int32`. + int32, 0 <= dimension < rank(input). Describes which dimension + of the input Tensor to reduce across. For vectors, use dimension = 0. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `int64`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_positive.md deleted file mode 100644 index 8b727d6215..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_positive.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.assert_positive(x, data=None, summarize=None, name=None)` {#assert_positive} - -Assert the condition `x > 0` holds element-wise. 
- -Example of adding a dependency to an operation: - -```python -with tf.control_dependencies([tf.assert_positive(x)]): - output = tf.reduce_sum(x) -``` - -Example of adding dependency to the tensor being checked: - -```python -x = tf.with_dependencies([tf.assert_positive(x)], x) -``` - -Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`. -If `x` is empty this is trivially satisfied. - -##### Args: - - -* `x`: Numeric `Tensor`. -* `data`: The tensors to print out if the condition is False. Defaults to - error message and first few entries of `x`. -* `summarize`: Print this many entries of each tensor. -* `name`: A name for this operation (optional). Defaults to "assert_positive". - -##### Returns: - - Op raising `InvalidArgumentError` unless `x` is all positive. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md new file mode 100644 index 0000000000..ba01073765 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md @@ -0,0 +1,18 @@ +### `tf.assert_proper_iterable(values)` {#assert_proper_iterable} + +Static assert that values is a "proper" iterable. + +`Ops` that expect iterables of `Tensor` can call this to validate input. +Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves. + +##### Args: + + +* `values`: Object to be checked. + +##### Raises: + + +* `TypeError`: If `values` is not iterable or is one of + `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`. 
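As a rough illustration of the check described above, here is a hypothetical pure-Python helper. It only rejects non-iterables plus `str`/`bytes`; the real `tf.assert_proper_iterable` also rejects `Tensor`, `SparseTensor`, and `np.ndarray`, which are omitted in this sketch:

```python
def check_proper_iterable(values):
    # Reject types that are technically iterable but should be treated
    # as atomic values, mirroring the behavior documented above.
    if isinstance(values, (str, bytes)):
        raise TypeError(
            "Expected a proper iterable, got %s." % type(values).__name__)
    if not hasattr(values, "__iter__"):
        raise TypeError(
            "Expected an iterable, got %s." % type(values).__name__)
```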
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.audio_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.audio_summary.md new file mode 100644 index 0000000000..a592378b88 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.audio_summary.md @@ -0,0 +1,35 @@ +### `tf.audio_summary(tag, tensor, sample_rate, max_outputs=3, collections=None, name=None)` {#audio_summary} + +Outputs a `Summary` protocol buffer with audio. + +The summary has up to `max_outputs` summary values containing audio. The +audio is built from `tensor` which must be 3-D with shape `[batch_size, +frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are +assumed to be in the range of `[-1.0, 1.0]` with a sample rate of +`sample_rate`. + +The `tag` argument is a scalar `Tensor` of type `string`. It is used to +build the `tag` of the summary values: + +* If `max_outputs` is 1, the summary value tag is '*tag*/audio'. +* If `max_outputs` is greater than 1, the summary value tags are + generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc. + +##### Args: + + +* `tag`: A scalar `Tensor` of type `string`. Used to build the `tag` + of the summary values. +* `tensor`: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` + or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`. +* `sample_rate`: The sample rate of the signal in hertz. +* `max_outputs`: Max number of batch elements to generate audio for. +* `collections`: Optional list of ops.GraphKeys. The collections to add the + summary to. Defaults to [ops.GraphKeys.SUMMARIES] +* `name`: A name for the operation (optional). + +##### Returns: + + A scalar `Tensor` of type `string`. The serialized `Summary` protocol + buffer. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_ifft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_ifft.md new file mode 100644 index 0000000000..c4b865425b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_ifft.md @@ -0,0 +1,18 @@ +### `tf.batch_ifft(input, name=None)` {#batch_ifft} + +Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most +dimension of `input`. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + A complex64 tensor of the same shape as `input`. The inner-most + dimension of `input` is replaced with its inverse 1D Fourier Transform. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag.md deleted file mode 100644 index 6e5458ba6c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag.md +++ /dev/null @@ -1,42 +0,0 @@ -### `tf.batch_matrix_diag(diagonal, name=None)` {#batch_matrix_diag} - -Returns a batched diagonal tensor with a given batched diagonal values. - -Given a `diagonal`, this operation returns a tensor with the `diagonal` and -everything else padded with zeros. The diagonal is computed as follows: - -Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a -tensor of rank `k+1` with dimensions [I, J, K, ..., N, N]` where: - -`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.
- -For example: - -```prettyprint -# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] - -and diagonal.shape = (2, 4) - -tf.batch_matrix_diag(diagonal) ==> [[[1, 0, 0, 0] - [0, 2, 0, 0] - [0, 0, 3, 0] - [0, 0, 0, 4]], - [[5, 0, 0, 0] - [0, 6, 0, 0] - [0, 0, 7, 0] - [0, 0, 0, 8]]] - -which has shape (2, 4, 4) -``` - -##### Args: - - -* `diagonal`: A `Tensor`. Rank `k`, where `k >= 1`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `diagonal`. - Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag_part.md new file mode 100644 index 0000000000..0eb431d7a9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_diag_part.md @@ -0,0 +1,46 @@ +### `tf.batch_matrix_diag_part(input, name=None)` {#batch_matrix_diag_part} + +Returns the batched diagonal part of a batched tensor. + +This operation returns a tensor with the `diagonal` part +of the batched `input`. The `diagonal` part is computed as follows: + +Assume `input` has `k` dimensions `[I, J, K, ..., N, N]`, then the output is a +tensor of rank `k - 1` with dimensions `[I, J, K, ..., N]` where: + +`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`. + +The input must be at least a matrix. + +For example: + +```prettyprint +# 'input' is [[[1, 0, 0, 0] + [0, 2, 0, 0] + [0, 0, 3, 0] + [0, 0, 0, 4]], + [[5, 0, 0, 0] + [0, 6, 0, 0] + [0, 0, 7, 0] + [0, 0, 0, 8]]] + +and input.shape = (2, 4, 4) + +tf.batch_matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]] + +which has shape (2, 4) +``` + +##### Args: + + +* `input`: A `Tensor`. + Rank `k` tensor where `k >= 2` and the last two dimensions are equal. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. 
+ The extracted diagonal(s) having shape + `diagonal.shape = input.shape[:-1]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve.md new file mode 100644 index 0000000000..f75ea79bc5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve.md @@ -0,0 +1,27 @@ +### `tf.batch_matrix_solve(matrix, rhs, adjoint=None, name=None)` {#batch_matrix_solve} + +Solves systems of linear equations. Checks for invertibility. + +Matrix is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions +form square matrices. Rhs is a tensor of shape +`[..., M, K]`. The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output +matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. +If `adjoint` is `True` then each output +matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`. + +##### Args: + + +* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. + Shape is `[..., M, M]`. +* `rhs`: A `Tensor`. Must have the same type as `matrix`. + Shape is `[..., M, K]`. +* `adjoint`: An optional `bool`. Defaults to `False`. + Boolean indicating whether to solve with `matrix` or its (block-wise) + adjoint. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
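For intuition, the equation `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` can be solved by hand for one inner-most system. The sketch below (plain Python, a single 2x2 matrix with a K=1 right-hand side, not the batched TensorFlow kernel) uses Cramer's rule and raises on a singular matrix, mirroring the invertibility check:

```python
def solve_2x2(matrix, rhs):
    # Solve one 2x2 system A x = b by Cramer's rule; a stand-in for a
    # single inner-most matrix/rhs pair of tf.batch_matrix_solve.
    (a, b), (c, d) = matrix
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    x0, x1 = rhs
    return [(d * x0 - b * x1) / det, (a * x1 - c * x0) / det]
```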
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve_ls.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve_ls.md deleted file mode 100644 index 2b33669fa2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_solve_ls.md +++ /dev/null @@ -1,56 +0,0 @@ -### `tf.batch_matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#batch_matrix_solve_ls} - -Solves multiple linear least-squares problems. - -`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions -form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose -inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a -`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K` -matrices that solve the equations -`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares -sense. - -Below we will use the following notation for each pair of -matrix and right-hand sides in the batch: - -`matrix`=\\(A \in \Re^{m \times n}\\), -`rhs`=\\(B \in \Re^{m \times k}\\), -`output`=\\(X \in \Re^{n \times k}\\), -`l2_regularizer`=\\(\lambda\\). - -If `fast` is `True`, then the solution is computed by solving the normal -equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then -\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares -problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + -\lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as -\\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is -the minimum-norm solution to the under-determined linear system, i.e. -\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), subject to -\\(A Z = B\\). 
Notice that the fast path is only numerically stable when -\\(A\\) is numerically full rank and has a condition number -\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) or\\(\lambda\\) -is sufficiently large. - -If `fast` is `False` an algorithm based on the numerically robust complete -orthogonal decomposition is used. This computes the minimum-norm -least-squares solution, even when \\(A\\) is rank deficient. This path is -typically 6-7 times slower than the fast path. If `fast` is `False` then -`l2_regularizer` is ignored. - -##### Args: - - -* `matrix`: `Tensor` of shape `[..., M, N]`. -* `rhs`: `Tensor` of shape `[..., M, K]`. -* `l2_regularizer`: 0-D `double` `Tensor`. Ignored if `fast=False`. -* `fast`: bool. Defaults to `True`. -* `name`: string, optional name of the operation. - -##### Returns: - - -* `output`: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form - `M`-by-`K` matrices that solve the equations - `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least - squares sense. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_triangular_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_triangular_solve.md deleted file mode 100644 index 297e19088d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_matrix_triangular_solve.md +++ /dev/null @@ -1,39 +0,0 @@ -### `tf.batch_matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#batch_matrix_triangular_solve} - -Solves systems of linear equations with upper or lower triangular matrices by - -backsubstitution. - -`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form -square matrices. If `lower` is `True` then the strictly upper triangular part -of each inner-most matrix is assumed to be zero and not accessed. 
-If `lower` is False then the strictly lower triangular part of each inner-most -matrix is assumed to be zero and not accessed. -`rhs` is a tensor of shape [..., M, K]`. - -The output is a tensor of shape `[..., M, K]`. If `adjoint` is `True` then the -innermost matrices in output` satisfy matrix equations -`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. -If `adjoint` is `False` then the strictly then the innermost matrices in -`output` satisfy matrix equations -`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`. - -##### Args: - - -* `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[..., M, M]`. -* `rhs`: A `Tensor`. Must have the same type as `matrix`. - Shape is `[..., M, K]`. -* `lower`: An optional `bool`. Defaults to `True`. - Boolean indicating whether the innermost matrices in `matrix` are - lower or upper triangular. -* `adjoint`: An optional `bool`. Defaults to `False`. - Boolean indicating whether to solve with `matrix` or its (block-wise) - adjoint. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_average_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_average_norm.md deleted file mode 100644 index 4598e183d8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_average_norm.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.clip_by_average_norm(t, clip_norm, name=None)` {#clip_by_average_norm} - -Clips tensor values to a maximum average L2-norm. - -Given a tensor `t`, and a maximum clip value `clip_norm`, this operation -normalizes `t` so that its average L2-norm is less than or equal to -`clip_norm`. Specifically, if the average L2-norm is already less than or -equal to `clip_norm`, then `t` is not modified. 
If the average L2-norm is -greater than `clip_norm`, then this operation returns a tensor of the same -type and shape as `t` with its values set to: - -`t * clip_norm / l2norm_avg(t)` - -In this case, the average L2-norm of the output tensor is `clip_norm`. - -This operation is typically used to clip gradients before applying them with -an optimizer. - -##### Args: - - -* `t`: A `Tensor`. -* `clip_norm`: A 0-D (scalar) `Tensor` > 0. A maximum clipping value. -* `name`: A name for the operation (optional). - -##### Returns: - - A clipped `Tensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_global_norm.md new file mode 100644 index 0000000000..a40f621bf4 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_global_norm.md @@ -0,0 +1,50 @@ +### `tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)` {#clip_by_global_norm} + +Clips values of multiple tensors by the ratio of the sum of their norms. + +Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`, +this operation returns a list of clipped tensors `list_clipped` +and the global norm (`global_norm`) of all tensors in `t_list`. Optionally, +if you've already computed the global norm for `t_list`, you can specify +the global norm with `use_norm`. + +To perform the clipping, the values `t_list[i]` are set to: + + t_list[i] * clip_norm / max(global_norm, clip_norm) + +where: + + global_norm = sqrt(sum([l2norm(t)**2 for t in t_list])) + +If `clip_norm > global_norm` then the entries in `t_list` remain as they are, +otherwise they're all shrunk by the global ratio. + +Any of the entries of `t_list` that are of type `None` are ignored. + +This is the correct way to perform gradient clipping (for example, see +[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063) +([pdf](http://arxiv.org/pdf/1211.5063.pdf))). 
+ + +However, it is slower than `clip_by_norm()` because all the parameters must be +ready before the clipping operation can be performed. + +##### Args: + + +* `t_list`: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None. +* `clip_norm`: A 0-D (scalar) `Tensor` > 0. The clipping ratio. +* `use_norm`: A 0-D (scalar) `Tensor` of type `float` (optional). The global + norm to use. If not provided, `global_norm()` is used to compute the norm. +* `name`: A name for the operation (optional). + +##### Returns: + + +* `list_clipped`: A list of `Tensors` of the same type as `t_list`. +* `global_norm`: A 0-D (scalar) `Tensor` representing the global norm. + +##### Raises: + + +* `TypeError`: If `t_list` is not a sequence. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_norm.md deleted file mode 100644 index a393375986..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.clip_by_norm.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.clip_by_norm(t, clip_norm, name=None)` {#clip_by_norm} - -Clips tensor values to a maximum L2-norm. - -Given a tensor `t`, and a maximum clip value `clip_norm`, this operation -normalizes `t` so that its L2-norm is less than or equal to `clip_norm`. -Specifically, if the L2-norm is already less than or equal to `clip_norm`, -then `t` is not modified. If the L2-norm is greater than `clip_norm`, then -this operation returns a tensor of the same type and shape as `t` with its -values set to: - -`t * clip_norm / l2norm(t)` - -In this case, the L2-norm of the output tensor is `clip_norm`. - -This operation is typically used to clip gradients before applying them with -an optimizer. - -##### Args: - - -* `t`: A `Tensor`. -* `clip_norm`: A 0-D (scalar) `Tensor` > 0. A maximum clipping value. -* `name`: A name for the operation (optional). - -##### Returns: - - A clipped `Tensor`.
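The global-norm clipping formula above is easy to verify with a plain-Python sketch, treating each entry of `t_list` as a flat list of floats and ignoring `None` entries (illustrative only; the real op works on tensors and `IndexedSlices`):

```python
import math

def clip_by_global_norm_sketch(t_list, clip_norm):
    # global_norm = sqrt(sum of squared L2 norms of all entries);
    # every entry is then scaled by clip_norm / max(global_norm, clip_norm),
    # so nothing changes when global_norm <= clip_norm.
    global_norm = math.sqrt(sum(
        sum(v * v for v in t) for t in t_list if t is not None))
    scale = clip_norm / max(global_norm, clip_norm)
    clipped = [None if t is None else [v * scale for v in t] for t in t_list]
    return clipped, global_norm
```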
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.constant_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.constant_initializer.md deleted file mode 100644 index 4ac524d708..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.constant_initializer.md +++ /dev/null @@ -1,20 +0,0 @@ -### `tf.constant_initializer(value=0.0, dtype=tf.float32)` {#constant_initializer} - -Returns an initializer that generates tensors with a single value. - -##### Args: - - -* `value`: A Python scalar. All elements of the initialized variable - will be set to this value. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer that generates tensors with a single value. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.ContinuousDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.ContinuousDistribution.md deleted file mode 100644 index e474870cd4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.ContinuousDistribution.md +++ /dev/null @@ -1,153 +0,0 @@ -Base class for continuous probability distributions. - -`ContinuousDistribution` defines the API for the likelihood functions `pdf` -and `log_pdf` of continuous probability distributions, and a property -`is_reparameterized` (returning `True` or `False`) which describes -whether the samples of this distribution are calculated in a differentiable -way from a non-parameterized distribution. For example, the `Normal` -distribution with parameters `mu` and `sigma` is reparameterized as - -```Normal(mu, sigma) = sigma * Normal(0, 1) + mu``` - -Subclasses must override `pdf` and `log_pdf` but one can call this base -class's implementation. 
They must also override the `is_reparameterized` -property. - -See `BaseDistribution` for more information on the API for probability -distributions. -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.batch_shape(name=None)` {#ContinuousDistribution.batch_shape} - -Batch dimensions of this instance as a 1-D int32 `Tensor`. - -The product of the dimensions of the `batch_shape` is the number of -independent distributions of this kind the instance represents. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `batch_shape` - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.cdf(value, name='cdf')` {#ContinuousDistribution.cdf} - -Cumulative distribution function. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.dtype` {#ContinuousDistribution.dtype} - -dtype of samples from this distribution. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.entropy(name=None)` {#ContinuousDistribution.entropy} - -Entropy of the distribution in nats. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.event_shape(name=None)` {#ContinuousDistribution.event_shape} - -Shape of a sample from a single distribution as a 1-D int32 `Tensor`. - -##### Args: - - -* `name`: name to give to the op - -##### Returns: - - `Tensor` `event_shape` - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.get_batch_shape()` {#ContinuousDistribution.get_batch_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `batch_shape`. May be only partially defined. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.get_event_shape()` {#ContinuousDistribution.get_event_shape} - -`TensorShape` available at graph construction time. - -Same meaning as `event_shape`. May be only partially defined. 
- - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.is_reparameterized` {#ContinuousDistribution.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.log_cdf(value, name='log_cdf')` {#ContinuousDistribution.log_cdf} - -Log CDF. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.log_pdf(value, name='log_pdf')` {#ContinuousDistribution.log_pdf} - -Log of the probability density function. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.mean` {#ContinuousDistribution.mean} - - - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.name` {#ContinuousDistribution.name} - -Name to prepend to all ops. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.pdf(value, name='pdf')` {#ContinuousDistribution.pdf} - -Probability density function. - - -- - - - -#### `tf.contrib.distributions.ContinuousDistribution.sample(n, seed=None, name=None)` {#ContinuousDistribution.sample} - -Generate `n` samples. - -##### Args: - - -* `n`: scalar. Number of samples to draw from each distribution. -* `seed`: Python integer seed for RNG -* `name`: name to give to the op. - -##### Returns: - - -* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` - with values of type `self.dtype`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.DirichletMultinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.DirichletMultinomial.md deleted file mode 100644 index 1d8cb6a6dd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.DirichletMultinomial.md +++ /dev/null @@ -1,185 +0,0 @@ -DirichletMultinomial mixture distribution. - -This distribution is parameterized by a vector `alpha` of concentration -parameters for `k` classes. 
- -#### Mathematical details - -The Dirichlet Multinomial is a distribution over k-class count data, meaning -for each k-tuple of non-negative integer `counts = [c_1,...,c_k]`, we have a -probability of these draws being made from the distribution. The distribution -has hyperparameters `alpha = (alpha_1,...,alpha_k)`, and probability mass -function (pmf): - -```pmf(counts) = C! / (c_1!...c_k!) * Beta(alpha + c) / Beta(alpha)``` - -where above `C = sum_j c_j`, `N!` is `N` factorial, and -`Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the multivariate beta -function. - -This is a mixture distribution in that `N` samples can be produced by: - 1. Choose class probabilities `p = (p_1,...,p_k) ~ Dir(alpha)` - 2. Draw integers `m = (m_1,...,m_k) ~ Multinomial(p, N)` - -This class provides methods to create indexed batches of Dirichlet -Multinomial distributions. If the provided `alpha` is rank 2 or higher, for -every fixed set of leading dimensions, the last dimension represents one -single Dirichlet Multinomial distribution. When calling distribution -functions (e.g. `dist.pdf(counts)`), `alpha` and `counts` are broadcast to the -same shape (if possible). In all cases, the last dimension of alpha/counts -represents single Dirichlet Multinomial distributions. - -#### Examples - -```python -alpha = [1, 2, 3] -dist = DirichletMultinomial(alpha) -``` - -Creates a 3-class distribution, with the 3rd class is most likely to be drawn. -The distribution functions can be evaluated on counts. - -```python -# counts same shape as alpha. -counts = [0, 2, 0] -dist.pdf(counts) # Shape [] - -# alpha will be broadcast to [[1, 2, 3], [1, 2, 3]] to match counts. -counts = [[11, 22, 33], [44, 55, 66]] -dist.pdf(counts) # Shape [2] - -# alpha will be broadcast to shape [5, 7, 3] to match counts. -counts = [[...]] # Shape [5, 7, 3] -dist.pdf(counts) # Shape [5, 7] -``` - -Creates a 2-batch of 3-class distributions. 
- -```python -alpha = [[1, 2, 3], [4, 5, 6]] # Shape [2, 3] -dist = DirichletMultinomial(alpha) - -# counts will be broadcast to [[11, 22, 33], [11, 22, 33]] to match alpha. -counts = [11, 22, 33] -dist.pdf(counts) # Shape [2] -``` -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.__init__(alpha)` {#DirichletMultinomial.__init__} - -Initialize a batch of DirichletMultinomial distributions. - -##### Args: - - -* `alpha`: Shape `[N1,..., Nn, k]` positive `float` or `double` tensor with - `n >= 0`. Defines this as a batch of `N1 x ... x Nn` different `k` - class Dirichlet multinomial distributions. - - -* `Examples`: - -```python -# Define 1-batch of 2-class Dirichlet multinomial distribution, -# also known as a beta-binomial. -dist = DirichletMultinomial([1.1, 2.0]) - -# Define a 2-batch of 3-class distributions. -dist = DirichletMultinomial([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) -``` - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.alpha` {#DirichletMultinomial.alpha} - -Parameters defining this distribution. - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.cdf(x)` {#DirichletMultinomial.cdf} - - - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.dtype` {#DirichletMultinomial.dtype} - - - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.log_cdf(x)` {#DirichletMultinomial.log_cdf} - - - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.log_pmf(counts, name=None)` {#DirichletMultinomial.log_pmf} - -`Log(P[counts])`, computed for every batch member. - -For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability -that after sampling `sum_j c_j` draws from this Dirichlet Multinomial -distribution, the number of draws falling in class `j` is `c_j`. Note that -different sequences of draws can result in the same counts, thus the -probability includes a combinatorial coefficient. 
- -##### Args: - - -* `counts`: Non-negative `float`, `double`, or `int` tensor whose shape can - be broadcast with `self.alpha`. For fixed leading dimensions, the last - dimension represents counts for the corresponding Dirichlet Multinomial - distribution in `self.alpha`. -* `name`: Name to give this Op, defaults to "log_pmf". - -##### Returns: - - Log probabilities for each record, shape `[N1,...,Nn]`. - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.mean` {#DirichletMultinomial.mean} - -Class means for every batch member. - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.num_classes` {#DirichletMultinomial.num_classes} - -Tensor providing number of classes in each batch member. - - -- - - - -#### `tf.contrib.distributions.DirichletMultinomial.pmf(counts, name=None)` {#DirichletMultinomial.pmf} - -`P[counts]`, computed for every batch member. - -For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability -that after sampling `sum_j c_j` draws from this Dirichlet Multinomial -distribution, the number of draws falling in class `j` is `c_j`. Note that -different sequences of draws can result in the same counts, thus the -probability includes a combinatorial coefficient. - -##### Args: - - -* `counts`: Non-negative `float`, `double`, or `int` tensor whose shape can - be broadcast with `self.alpha`. For fixed leading dimensions, the last - dimension represents counts for the corresponding Dirichlet Multinomial - distribution in `self.alpha`. -* `name`: Name to give this Op, defaults to "pmf". - -##### Returns: - - Probabilities for each record, shape `[N1,...,Nn]`. 
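The pmf formula above is easy to check numerically. The sketch below is plain Python (independent of the TensorFlow classes in this diff) that evaluates the pmf in log-space with `math.lgamma` for numerical stability; function names are illustrative only.

```python
import math

def log_multivariate_beta(v):
    # log Beta(v) = sum_j log Gamma(v_j) - log Gamma(sum_j v_j)
    return sum(math.lgamma(x) for x in v) - math.lgamma(sum(v))

def dirichlet_multinomial_pmf(alpha, counts):
    # pmf(counts) = C! / (c_1! ... c_k!) * Beta(alpha + c) / Beta(alpha)
    log_coeff = (math.lgamma(sum(counts) + 1)
                 - sum(math.lgamma(c + 1) for c in counts))
    log_ratio = (log_multivariate_beta([a + c for a, c in zip(alpha, counts)])
                 - log_multivariate_beta(alpha))
    return math.exp(log_coeff + log_ratio)

# Beta-binomial with a uniform prior: a single draw lands in either
# class with probability 1/2.
print(dirichlet_multinomial_pmf([1.0, 1.0], [1, 0]))  # ~0.5
```

As a further sanity check, for fixed total count `C` the pmf sums to one over all count vectors.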
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Exponential.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Exponential.md new file mode 100644 index 0000000000..62181034b9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Exponential.md @@ -0,0 +1,260 @@ +The Exponential distribution with rate parameter lam. + +The PDF of this distribution is: + +```pdf(x) = (lam * e^(-lam * x)), x > 0``` + +Note that the Exponential distribution is a special case of the Gamma +distribution, with Exponential(lam) = Gamma(1, lam). +- - - + +#### `tf.contrib.distributions.Exponential.__init__(lam, name='Exponential')` {#Exponential.__init__} + + + + +- - - + +#### `tf.contrib.distributions.Exponential.alpha` {#Exponential.alpha} + +Shape parameter. + + +- - - + +#### `tf.contrib.distributions.Exponential.batch_shape(name='batch_shape')` {#Exponential.batch_shape} + +Batch dimensions of this instance as a 1-D int32 `Tensor`. + +The product of the dimensions of the `batch_shape` is the number of +independent distributions of this kind the instance represents. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `batch_shape` + + +- - - + +#### `tf.contrib.distributions.Exponential.beta` {#Exponential.beta} + +Inverse scale parameter. + + +- - - + +#### `tf.contrib.distributions.Exponential.cdf(x, name='cdf')` {#Exponential.cdf} + +CDF of observations `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.Exponential.dtype` {#Exponential.dtype} + +dtype of samples from this distribution. 
+ + +- - - + +#### `tf.contrib.distributions.Exponential.entropy(name='entropy')` {#Exponential.entropy} + +The entropy of Gamma distribution(s). + +This is defined to be + +``` +entropy = alpha - log(beta) + log(Gamma(alpha)) + + (1-alpha)digamma(alpha) +``` + +where digamma(alpha) is the digamma function. + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. + + +- - - + +#### `tf.contrib.distributions.Exponential.event_shape(name='event_shape')` {#Exponential.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.Exponential.get_batch_shape()` {#Exponential.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Exponential.get_event_shape()` {#Exponential.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. + +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Exponential.is_reparameterized` {#Exponential.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.Exponential.lam` {#Exponential.lam} + + + + +- - - + +#### `tf.contrib.distributions.Exponential.log_cdf(x, name='log_cdf')` {#Exponential.log_cdf} + +Log CDF of observations `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. 
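The entropy formula quoted above can be verified for the exponential special case (`alpha = 1`), where it reduces to the classic `1 - log(rate)`. This is a plain-Python sketch: the finite-difference `digamma` below is an assumption-laden stand-in for a real digamma implementation, adequate only for a sanity check.

```python
import math

def digamma(x, h=1e-5):
    # Central-difference approximation of d/dx log Gamma(x);
    # fine for a sanity check, not for production use.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def gamma_entropy(alpha, beta):
    # entropy = alpha - log(beta) + log(Gamma(alpha))
    #           + (1 - alpha) * digamma(alpha)
    return (alpha - math.log(beta) + math.lgamma(alpha)
            + (1 - alpha) * digamma(alpha))

# For alpha = 1, Gamma(1, beta) is Exponential(beta), whose entropy
# is 1 - log(beta).
lam = 2.0
assert abs(gamma_entropy(1.0, lam) - (1.0 - math.log(lam))) < 1e-6
```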
+ + +- - - + +#### `tf.contrib.distributions.Exponential.log_pdf(x, name='log_pdf')` {#Exponential.log_pdf} + +Log pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Exponential.mean` {#Exponential.mean} + +Mean of each batch member. + + +- - - + +#### `tf.contrib.distributions.Exponential.name` {#Exponential.name} + +Name to prepend to all ops. + + +- - - + +#### `tf.contrib.distributions.Exponential.pdf(x, name='pdf')` {#Exponential.pdf} + +Pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the PDFs of `x` + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Exponential.sample(n, seed=None, name=None)` {#Exponential.sample} + +Sample `n` observations from the Exponential Distributions. + +##### Args: + + +* `n`: `Scalar`, type int32, the number of observations to sample. +* `seed`: Python integer, the random seed. +* `name`: The name to give this op. + +##### Returns: + + +* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each + of the distributions determined by the hyperparameters. + + +- - - + +#### `tf.contrib.distributions.Exponential.variance` {#Exponential.variance} + +Variance of each batch member. 
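The `sample` method above draws exponential variates; the underlying math can be sketched in plain Python via inverse-CDF sampling (an illustration of the distribution, not TensorFlow's sampling kernel):

```python
import math
import random

def sample_exponential(n, lam, seed=None):
    # Inverse-CDF ("inverse transform") sampling: if U ~ Uniform(0, 1),
    # then -log(1 - U) / lam ~ Exponential(lam).
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(n=20000, lam=2.0, seed=0)
mean = sum(samples) / len(samples)
# The exponential mean is 1 / lam = 0.5; the sample mean should be close.
assert abs(mean - 0.5) < 0.02
```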
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Gamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Gamma.md new file mode 100644 index 0000000000..5a7bbea7ae --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Gamma.md @@ -0,0 +1,284 @@ +The `Gamma` distribution with parameter alpha and beta. + +The parameters are the shape and inverse scale parameters alpha, beta. + +The PDF of this distribution is: + +```pdf(x) = (beta^alpha)(x^(alpha-1))e^(-x*beta)/Gamma(alpha), x > 0``` + +and the CDF of this distribution is: + +```cdf(x) = GammaInc(alpha, beta * x) / Gamma(alpha), x > 0``` + +where GammaInc is the incomplete lower Gamma function. + +Examples: + +```python +dist = Gamma(alpha=3.0, beta=2.0) +dist2 = Gamma(alpha=[3.0, 4.0], beta=[2.0, 3.0]) +``` +- - - + +#### `tf.contrib.distributions.Gamma.__init__(alpha, beta, name='Gamma')` {#Gamma.__init__} + +Construct Gamma distributions with parameters `alpha` and `beta`. + +The parameters `alpha` and `beta` must be shaped in a way that supports +broadcasting (e.g. `alpha + beta` is a valid operation). + +##### Args: + + +* `alpha`: `float` or `double` tensor, the shape params of the + distribution(s). + alpha must contain only positive values. +* `beta`: `float` or `double` tensor, the inverse scale params of the + distribution(s). + beta must contain only positive values. +* `name`: The name to prepend to all ops created by this distribution. + +##### Raises: + + +* `TypeError`: if `alpha` and `beta` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Gamma.alpha` {#Gamma.alpha} + +Shape parameter. + + +- - - + +#### `tf.contrib.distributions.Gamma.batch_shape(name='batch_shape')` {#Gamma.batch_shape} + +Batch dimensions of this instance as a 1-D int32 `Tensor`. 
+ +The product of the dimensions of the `batch_shape` is the number of +independent distributions of this kind the instance represents. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `batch_shape` + + +- - - + +#### `tf.contrib.distributions.Gamma.beta` {#Gamma.beta} + +Inverse scale parameter. + + +- - - + +#### `tf.contrib.distributions.Gamma.cdf(x, name='cdf')` {#Gamma.cdf} + +CDF of observations `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.Gamma.dtype` {#Gamma.dtype} + +dtype of samples from this distribution. + + +- - - + +#### `tf.contrib.distributions.Gamma.entropy(name='entropy')` {#Gamma.entropy} + +The entropy of Gamma distribution(s). + +This is defined to be + +``` +entropy = alpha - log(beta) + log(Gamma(alpha)) + + (1-alpha)digamma(alpha) +``` + +where digamma(alpha) is the digamma function. + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. + + +- - - + +#### `tf.contrib.distributions.Gamma.event_shape(name='event_shape')` {#Gamma.event_shape} + +Shape of a sample from a single distribution as a 1-D int32 `Tensor`. + +##### Args: + + +* `name`: name to give to the op + +##### Returns: + + `Tensor` `event_shape` + + +- - - + +#### `tf.contrib.distributions.Gamma.get_batch_shape()` {#Gamma.get_batch_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `batch_shape`. May be only partially defined. + +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Gamma.get_event_shape()` {#Gamma.get_event_shape} + +`TensorShape` available at graph construction time. + +Same meaning as `event_shape`. May be only partially defined. 
+ +##### Returns: + + `TensorShape` object. + + +- - - + +#### `tf.contrib.distributions.Gamma.is_reparameterized` {#Gamma.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.Gamma.log_cdf(x, name='log_cdf')` {#Gamma.log_cdf} + +Log CDF of observations `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.Gamma.log_pdf(x, name='log_pdf')` {#Gamma.log_pdf} + +Log pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Gamma.mean` {#Gamma.mean} + +Mean of each batch member. + + +- - - + +#### `tf.contrib.distributions.Gamma.name` {#Gamma.name} + +Name to prepend to all ops. + + +- - - + +#### `tf.contrib.distributions.Gamma.pdf(x, name='pdf')` {#Gamma.pdf} + +Pdf of observations in `x` under these Gamma distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the PDFs of `x` + +##### Raises: + + +* `TypeError`: if `x` and `alpha` are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Gamma.sample(n, seed=None, name=None)` {#Gamma.sample} + +Generate `n` samples. + +##### Args: + + +* `n`: scalar. Number of samples to draw from each distribution. +* `seed`: Python integer seed for RNG +* `name`: name to give to the op. 
+ +##### Returns: + + +* `samples`: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape` + with values of type `self.dtype`. + + +- - - + +#### `tf.contrib.distributions.Gamma.variance` {#Gamma.variance} + +Variance of each batch member. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Normal.md deleted file mode 100644 index d15dd93a65..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Normal.md +++ /dev/null @@ -1,209 +0,0 @@ -The scalar Normal distribution with mean and stddev parameters mu, sigma. - -#### Mathematical details - -The PDF of this distribution is: - -```f(x) = sqrt(1/(2*pi*sigma^2)) exp(-(x-mu)^2/(2*sigma^2))``` - -#### Examples - -Examples of initialization of one or a batch of distributions. - -```python -# Define a single scalar Normal distribution. -dist = tf.contrib.distributions.Normal(mu=0, sigma=3) - -# Evaluate the cdf at 1, returning a scalar. -dist.cdf(1) - -# Define a batch of two scalar valued Normals. -# The first has mean 1 and standard deviation 11, the second 2 and 22. -dist = tf.contrib.distributions.Normal(mu=[1, 2.], sigma=[11, 22.]) - -# Evaluate the pdf of the first distribution on 0, and the second on 1.5, -# returning a length two tensor. -dist.pdf([0, 1.5]) - -# Get 3 samples, returning a 3 x 2 tensor. -dist.sample(3) -``` - -Arguments are broadcast when possible. - -```python -# Define a batch of two scalar valued Normals. -# Both have mean 1, but different standard deviations. -dist = tf.contrib.distributions.Normal(mu=1, sigma=[11, 22.]) - -# Evaluate the pdf of both distributions on the same point, 3.0, -# returning a length 2 tensor. 
-dist.pdf(3.0) -``` -- - - - -#### `tf.contrib.distributions.Normal.__init__(mu, sigma, name=None)` {#Normal.__init__} - -Construct Normal distributions with mean and stddev `mu` and `sigma`. - -The parameters `mu` and `sigma` must be shaped in a way that supports -broadcasting (e.g. `mu + sigma` is a valid operation). - -##### Args: - - -* `mu`: `float` or `double` tensor, the means of the distribution(s). -* `sigma`: `float` or `double` tensor, the stddevs of the distribution(s). - sigma must contain only positive values. -* `name`: The name to give Ops created by the initializer. - -##### Raises: - - -* `TypeError`: if mu and sigma are different dtypes. - - -- - - - -#### `tf.contrib.distributions.Normal.cdf(x, name=None)` {#Normal.cdf} - -CDF of observations in `x` under these Normal distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Normal.dtype` {#Normal.dtype} - - - - -- - - - -#### `tf.contrib.distributions.Normal.entropy(name=None)` {#Normal.entropy} - -The entropy of Normal distribution(s). - -##### Args: - - -* `name`: The name to give this op. - -##### Returns: - - -* `entropy`: tensor of dtype `dtype`, the entropy. - - -- - - - -#### `tf.contrib.distributions.Normal.is_reparameterized` {#Normal.is_reparameterized} - - - - -- - - - -#### `tf.contrib.distributions.Normal.log_cdf(x, name=None)` {#Normal.log_cdf} - -Log CDF of observations `x` under these Normal distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. 
- - -- - - - -#### `tf.contrib.distributions.Normal.log_pdf(x, name=None)` {#Normal.log_pdf} - -Log pdf of observations in `x` under these Normal distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. - - -- - - - -#### `tf.contrib.distributions.Normal.mean` {#Normal.mean} - - - - -- - - - -#### `tf.contrib.distributions.Normal.mu` {#Normal.mu} - - - - -- - - - -#### `tf.contrib.distributions.Normal.pdf(x, name=None)` {#Normal.pdf} - -The PDF of observations in `x` under these Normal distribution(s). - -##### Args: - - -* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. -* `name`: The name to give this op. - -##### Returns: - - -* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. - - -- - - - -#### `tf.contrib.distributions.Normal.sample(n, seed=None, name=None)` {#Normal.sample} - -Sample `n` observations from the Normal Distributions. - -##### Args: - - -* `n`: `Scalar`, type int32, the number of observations to sample. -* `seed`: Python integer, the random seed. -* `name`: The name to give this op. - -##### Returns: - - -* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each - of the distributions determined by broadcasting the hyperparameters. 
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.sigma` {#Normal.sigma}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md
new file mode 100644
index 0000000000..ae8eb00890
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_sigma_posterior.md
@@ -0,0 +1,48 @@
+### `tf.contrib.distributions.normal_conjugates_known_sigma_posterior(prior, sigma, s, n)` {#normal_conjugates_known_sigma_posterior}
+
+Posterior Normal distribution with conjugate prior on the mean.
+
+This model assumes that `n` observations (with sum `s`) come from a
+Normal with unknown mean `mu` (described by the Normal `prior`)
+and known variance `sigma^2`. The "known sigma posterior" is
+the distribution of the unknown `mu`.
+
+Accepts a prior Normal distribution object, having parameters
+`mu0` and `sigma0`, as well as known `sigma` values of the predictive
+distribution(s) (also assumed Normal),
+and statistical estimates `s` (the sum(s) of the observations) and
+`n` (the number(s) of observations).
+
+Returns a posterior (also Normal) distribution object, with parameters
+`(mu', sigma'^2)`, where:
+
+```
+mu ~ N(mu', sigma'^2)
+sigma'^2 = 1/(1/sigma0^2 + n/sigma^2),
+mu' = (mu0/sigma0^2 + s/sigma^2) * sigma'^2.
+```
+
+Distribution parameters from `prior`, as well as `sigma`, `s`, and `n`,
+will broadcast in the case of multidimensional sets of parameters.
+
+##### Args:
+
+
+* `prior`: `Normal` object of type `dtype`:
+    the prior distribution having parameters `(mu0, sigma0)`.
+* `sigma`: tensor of type `dtype`, taking values `sigma > 0`.
+    The known stddev parameter(s).
+* `s`: Tensor of type `dtype`. The sum(s) of observations.
+* `n`: Tensor of type `int`.
The number(s) of observations. + +##### Returns: + + A new Normal posterior distribution object for the unknown observation + mean `mu`. + +##### Raises: + + +* `TypeError`: if dtype of `s` does not match `dtype`, or `prior` is not a + Normal object. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.ffmpeg.decode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.ffmpeg.decode_audio.md new file mode 100644 index 0000000000..31b9cba01f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.ffmpeg.decode_audio.md @@ -0,0 +1,25 @@ +### `tf.contrib.ffmpeg.decode_audio(contents, file_format=None, samples_per_second=None, channel_count=None)` {#decode_audio} + +Create an op that decodes the contents of an audio file. + +##### Args: + + +* `contents`: The binary contents of the audio file to decode. This is a + scalar. +* `file_format`: A string specifying which format the contents will conform + to. This can be mp3, ogg, or wav. +* `samples_per_second`: The number of samples per second that is assumed. + In some cases, resampling will occur to generate the correct sample + rate. +* `channel_count`: The number of channels that should be created from the + audio contents. If the contents have more than this number, then + some channels will be merged or dropped. If contents has fewer than + this, then additional channels will be created from the existing ones. + +##### Returns: + + A rank 2 tensor that has time along dimension 0 and channels along + dimension 1. Dimension 0 will be `samples_per_second * length` wide, and + dimension 1 will be `channel_count` wide. 
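Returning to `normal_conjugates_known_sigma_posterior` above, the posterior-update formulas are easy to verify with scalars. This is a plain-Python sketch with made-up numbers, not the TensorFlow API:

```python
def known_sigma_posterior(mu0, sigma0, sigma, s, n):
    # sigma'^2 = 1 / (1/sigma0^2 + n/sigma^2)
    # mu'      = (mu0/sigma0^2 + s/sigma^2) * sigma'^2
    post_var = 1.0 / (1.0 / sigma0**2 + n / sigma**2)
    post_mu = (mu0 / sigma0**2 + s / sigma**2) * post_var
    return post_mu, post_var

# A flat-ish prior plus many observations pulls mu' toward the
# sample mean s / n.
mu, var = known_sigma_posterior(mu0=0.0, sigma0=10.0, sigma=1.0,
                                s=250.0, n=100)
# s / n = 2.5, so mu should land close to 2.5 with small posterior
# variance.
assert abs(mu - 2.5) < 0.01
```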
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.ModeKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.ModeKeys.md new file mode 100644 index 0000000000..83e0bd4119 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.ModeKeys.md @@ -0,0 +1,7 @@ +Standard names for model modes. + +The following standard keys are defined: + +* `TRAIN`: training mode. +* `EVAL`: evaluation mode. +* `INFER`: inference mode. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md deleted file mode 100644 index 521a8560e5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.learn.extract_pandas_labels(labels)` {#extract_pandas_labels} - -Extract data from pandas.DataFrame for labels - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.accuracy.md deleted file mode 100644 index f41fb78e31..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.accuracy.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.contrib.metrics.accuracy(predictions, labels, weights=None)` {#accuracy} - -Computes the percentage of times that predictions matches labels. - -##### Args: - - -* `predictions`: the predicted values, a `Tensor` whose dtype and shape - matches 'labels'. -* `labels`: the ground truth values, a `Tensor` of any shape and - integer or string dtype. -* `weights`: None or `Tensor` of float values to reweight the accuracy. - -##### Returns: - - Accuracy `Tensor`. 
- -##### Raises: - - -* `ValueError`: if dtypes don't match or - if dtype is not integer or string. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.set_intersection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.set_intersection.md new file mode 100644 index 0000000000..bd42f3fa01 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.set_intersection.md @@ -0,0 +1,23 @@ +### `tf.contrib.metrics.set_intersection(a, b, validate_indices=True)` {#set_intersection} + +Compute set intersection of elements in last dimension of `a` and `b`. + +All but the last dimension of `a` and `b` must match. + +##### Args: + + +* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices + must be sorted in row-major order. +* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be + `SparseTensor` if `a` is `SparseTensor`. If sparse, indices must be + sorted in row-major order. +* `validate_indices`: Whether to validate the order and range of sparse indices + in `a` and `b`. + +##### Returns: + + A `SparseTensor` with the same rank as `a` and `b`, and all but the last + dimension the same. Elements along the last dimension contain the + intersections. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean.md new file mode 100644 index 0000000000..780ecbaa7b --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean.md @@ -0,0 +1,44 @@ +### `tf.contrib.metrics.streaming_mean(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean} + +Computes the (weighted) mean of the given values. 
+
+The `streaming_mean` function creates two local variables, `total` and
+`count`, that are used to compute the average of `values`. This average is
+ultimately returned as `mean`, an idempotent operation that simply divides
+`total` by `count`. To facilitate the estimation of a mean over a stream
+of data, the function creates an `update_op` operation whose behavior is
+dependent on the value of `weights`. If `weights` is `None`, then `update_op`
+increments `total` with the reduced sum of `values` and increments `count`
+with the number of elements in `values`. If `weights` is not `None`, then
+`update_op` increments `total` with the reduced sum of the product of `values`
+and `weights` and increments `count` with the reduced sum of `weights`.
+In addition to performing the updates, `update_op` also returns the
+`mean`.
+
+##### Args:
+
+
+* `values`: A `Tensor` of arbitrary dimensions.
+* `weights`: An optional set of weights of the same shape as `values`. If
+    `weights` is not `None`, the function computes a weighted mean.
+* `metrics_collections`: An optional list of collections that `mean`
+    should be added to.
+* `updates_collections`: An optional list of collections that `update_op`
+    should be added to.
+* `name`: An optional variable_op_scope name.
+
+##### Returns:
+
+
+* `mean`: A tensor representing the current mean, the value of `total` divided
+    by `count`.
+* `update_op`: An operation that increments the `total` and `count` variables
+    appropriately and whose value matches `mean`.
+
+##### Raises:
+
+
+* `ValueError`: If `weights` is not `None` and its shape doesn't match `values`
+    or if either `metrics_collections` or `updates_collections` are not a list
+    or tuple.
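The `total`/`count` bookkeeping described above can be sketched as a tiny pure-Python accumulator. This is an analogy for the semantics, not the TensorFlow implementation:

```python
class StreamingMean:
    """Keeps `total` and `count`; `update` plays the role of `update_op`
    and `mean` the role of the idempotent result."""

    def __init__(self):
        self.total = 0.0
        self.count = 0.0

    def update(self, values, weights=None):
        if weights is None:
            self.total += sum(values)
            self.count += len(values)
        else:
            self.total += sum(v * w for v, w in zip(values, weights))
            self.count += sum(weights)
        return self.mean()  # like update_op, it also returns the mean

    def mean(self):
        return self.total / self.count

m = StreamingMean()
m.update([1.0, 2.0, 3.0])
m.update([4.0])
assert m.mean() == 2.5  # (1 + 2 + 3 + 4) / 4
```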
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_precision.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_precision.md new file mode 100644 index 0000000000..77ddaead32 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_precision.md @@ -0,0 +1,50 @@ +### `tf.contrib.metrics.streaming_precision(predictions, labels, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision} + +Computes the precision of the predictions with respect to the labels. + +The `streaming_precision` function creates two local variables, +`true_positives` and `false_positives`, that are used to compute the +precision. This value is ultimately returned as `precision`, an idempotent +operation that simply divides `true_positives` by the sum of `true_positives` +and `false_positives`. To facilitate the calculation of the precision over a +stream of data, the function creates an `update_op` operation whose behavior +is dependent on the value of `ignore_mask`. If `ignore_mask` is None, then +`update_op` increments `true_positives` with the number of elements of +`predictions` and `labels` that are both `True` and increments +`false_positives` with the number of elements of `predictions` that are `True` +whose corresponding `labels` element is `False`. If `ignore_mask` is not +`None`, then the increments for `true_positives` and `false_positives` are +only computed using elements of `predictions` and `labels` whose corresponding +values in `ignore_mask` are `False`. In addition to performing the updates, +`update_op` also returns the value of `precision`. + +##### Args: + + +* `predictions`: The predicted values, a binary `Tensor` of arbitrary shape. +* `labels`: The ground truth values, a binary `Tensor` whose dimensions must + match `predictions`. 
+* `ignore_mask`: An optional, binary tensor whose size matches `predictions`. +* `metrics_collections`: An optional list of collections that `precision` should + be added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `precision`: Scalar float `Tensor` with the value of `true_positives` + divided by the sum of `true_positives` and `false_positives`. +* `update_op`: `Operation` that increments `true_positives` and + `false_positives` variables appropriately and whose value matches + `precision`. + +##### Raises: + + +* `ValueError`: If the dimensions of `predictions` and `labels` don't match or + if `ignore_mask` is not `None` and its shape doesn't match `predictions` + or if either `metrics_collections` or `updates_collections` are not a list + or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_recall.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_recall.md new file mode 100644 index 0000000000..26308e2f5f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_recall.md @@ -0,0 +1,50 @@ +### `tf.contrib.metrics.streaming_recall(predictions, labels, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall} + +Computes the recall of the predictions with respect to the labels. + +The `streaming_recall` function creates two local variables, +`true_positives` and `false_negatives`, that are used to compute the +recall. This value is ultimately returned as `recall`, an idempotent +operation that simply divides `true_positives` by the sum of `true_positives` +and `false_negatives`. 
To facilitate the calculation of the recall over a
+stream of data, the function creates an `update_op` operation whose behavior
+is dependent on the value of `ignore_mask`. If `ignore_mask` is None, then
+`update_op` increments `true_positives` with the number of elements of
+`predictions` and `labels` that are both `True` and increments
+`false_negatives` with the number of elements of `predictions` that are
+`False` whose corresponding `labels` element is `True`. If `ignore_mask` is
+not `None`, then the increments for `true_positives` and `false_negatives` are
+only computed using elements of `predictions` and `labels` whose corresponding
+values in `ignore_mask` are `False`. In addition to performing the updates,
+`update_op` also returns the value of `recall`.
+
+##### Args:
+
+
+* `predictions`: The predicted values, a binary `Tensor` of arbitrary shape.
+* `labels`: The ground truth values, a binary `Tensor` whose dimensions must
+    match `predictions`.
+* `ignore_mask`: An optional, binary tensor whose size matches `predictions`.
+* `metrics_collections`: An optional list of collections that `recall` should
+    be added to.
+* `updates_collections`: An optional list of collections that `update_op` should
+    be added to.
+* `name`: An optional variable_op_scope name.
+
+##### Returns:
+
+
+* `recall`: Scalar float `Tensor` with the value of `true_positives` divided
+    by the sum of `true_positives` and `false_negatives`.
+* `update_op`: `Operation` that increments `true_positives` and
+    `false_negatives` variables appropriately and whose value matches
+    `recall`.
+
+##### Raises:
+
+
+* `ValueError`: If the dimensions of `predictions` and `labels` don't match or
+    if `ignore_mask` is not `None` and its shape doesn't match `predictions`
+    or if either `metrics_collections` or `updates_collections` are not a list
+    or tuple.
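The precision and recall counting rules described in the two sections above (true positives, false positives, false negatives, with an optional ignore mask) can be sketched over plain Python lists. This is an illustration of the definitions, not the streaming TensorFlow implementation:

```python
def precision_recall(predictions, labels, ignore_mask=None):
    # Counts true_positives, false_positives, and false_negatives over
    # the elements whose ignore_mask entry is False (or all elements,
    # if no mask is given).
    if ignore_mask is None:
        ignore_mask = [False] * len(predictions)
    tp = fp = fn = 0
    for pred, label, ignore in zip(predictions, labels, ignore_mask):
        if ignore:
            continue
        if pred and label:
            tp += 1
        elif pred and not label:
            fp += 1
        elif not pred and label:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

preds  = [True, True, False, True, False]
labels = [True, False, True, True, False]
p, r = precision_recall(preds, labels)
assert (p, r) == (2 / 3, 2 / 3)  # tp=2, fp=1, fn=1
```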
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.convert_to_tensor_or_indexed_slices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.convert_to_tensor_or_indexed_slices.md
new file mode 100644
index 0000000000..0c65e8327c
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.convert_to_tensor_or_indexed_slices.md
@@ -0,0 +1,27 @@
+### `tf.convert_to_tensor_or_indexed_slices(value, dtype=None, name=None, as_ref=False)` {#convert_to_tensor_or_indexed_slices}
+
+Converts the given object to a `Tensor` or an `IndexedSlices`.
+
+If `value` is an `IndexedSlices` or `SparseTensor` it is returned
+unmodified. Otherwise, it is converted to a `Tensor` using
+`convert_to_tensor()`.
+
+##### Args:
+
+
+* `value`: An `IndexedSlices`, `SparseTensor`, or an object that can be consumed
+  by `convert_to_tensor()`.
+* `dtype`: (Optional.) The required `DType` of the returned `Tensor` or
+  `IndexedSlices`.
+* `name`: (Optional.) A name to use if a new `Tensor` is created.
+* `as_ref`: True if the caller wants the results as ref tensors.
+
+##### Returns:
+
+  A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`.
+
+##### Raises:
+
+
+* `ValueError`: If `dtype` does not match the element type of `value`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cos.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cos.md
new file mode 100644
index 0000000000..b4f6f89933
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cos.md
@@ -0,0 +1,14 @@
+### `tf.cos(x, name=None)` {#cos}
+
+Computes cos of x element-wise.
+
+##### Args:
+
+
+* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `x`.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.count_up_to.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.count_up_to.md new file mode 100644 index 0000000000..da86e52f07 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.count_up_to.md @@ -0,0 +1,23 @@ +### `tf.count_up_to(ref, limit, name=None)` {#count_up_to} + +Increments 'ref' until it reaches 'limit'. + +This operation outputs "ref" after the update is done. This makes it +easier to chain operations that need to use the updated value. + +##### Args: + + +* `ref`: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`. + Should be from a scalar `Variable` node. +* `limit`: An `int`. + If incrementing ref would bring it above limit, instead generates an + 'OutOfRange' error. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `ref`. + A copy of the input before increment. If nothing else modifies the + input, the values produced will all be distinct. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md deleted file mode 100644 index eecf2e869b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.cross(a, b, name=None)` {#cross} - -Compute the pairwise cross product. - -`a` and `b` must be the same shape; they can either be simple 3-element vectors, -or any shape where the innermost dimension is 3. In the latter case, each pair -of corresponding 3-element vectors is cross-multiplied independently. - -##### Args: - - -* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. - A tensor containing 3-element vectors. -* `b`: A `Tensor`. Must have the same type as `a`. 
- Another tensor, of same type and shape as `a`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `a`. - Pairwise cross product of the vectors in `a` and `b`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.decode_raw.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.decode_raw.md deleted file mode 100644 index 125c15d9a8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.decode_raw.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.decode_raw(bytes, out_type, little_endian=None, name=None)` {#decode_raw} - -Reinterpret the bytes of a string as a vector of numbers. - -##### Args: - - -* `bytes`: A `Tensor` of type `string`. - All the elements must have the same length. -* `out_type`: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`. -* `little_endian`: An optional `bool`. Defaults to `True`. - Whether the input `bytes` are in little-endian order. - Ignored for `out_type` values that are stored in a single byte like - `uint8`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `out_type`. - A Tensor with one more dimension than the input `bytes`. The - added dimension will have size equal to the length of the elements - of `bytes` divided by the number of bytes to represent `out_type`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.diag.md deleted file mode 100644 index 94eb6a6717..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.diag.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.diag(diagonal, name=None)` {#diag} - -Returns a diagonal tensor with a given diagonal values. - -Given a `diagonal`, this operation returns a tensor with the `diagonal` and -everything else padded with zeros. 
The diagonal is computed as follows: - -Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of -rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where: - -`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else. - -For example: - -```prettyprint -# 'diagonal' is [1, 2, 3, 4] -tf.diag(diagonal) ==> [[1, 0, 0, 0] - [0, 2, 0, 0] - [0, 0, 3, 0] - [0, 0, 0, 4]] -``` - -##### Args: - - -* `diagonal`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`. - Rank k tensor where k is at most 3. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `diagonal`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.AbortedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.AbortedError.md deleted file mode 100644 index f2bc775dcb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.AbortedError.md +++ /dev/null @@ -1,15 +0,0 @@ -The operation was aborted, typically due to a concurrent action. - -For example, running a -[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue) -operation may raise `AbortedError` if a -[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) operation -previously ran. - -- - - - -#### `tf.errors.AbortedError.__init__(node_def, op, message)` {#AbortedError.__init__} - -Creates an `AbortedError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.FailedPreconditionError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.FailedPreconditionError.md new file mode 100644 index 0000000000..1cbd338bf9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.errors.FailedPreconditionError.md @@ -0,0 +1,13 @@ +Operation was rejected because the system is not in a state to execute it. 
+ +This exception is most commonly raised when running an operation +that reads a [`tf.Variable`](../../api_docs/python/state_ops.md#Variable) +before it has been initialized. + +- - - + +#### `tf.errors.FailedPreconditionError.__init__(node_def, op, message)` {#FailedPreconditionError.__init__} + +Creates a `FailedPreconditionError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.get_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.get_default_graph.md new file mode 100644 index 0000000000..bd734d1b98 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.get_default_graph.md @@ -0,0 +1,17 @@ +### `tf.get_default_graph()` {#get_default_graph} + +Returns the default graph for the current thread. + +The returned graph will be the innermost graph on which a +`Graph.as_default()` context has been entered, or a global default +graph if none has been explicitly created. + +NOTE: The default graph is a property of the current thread. If you +create a new thread, and wish to use the default graph in that +thread, you must explicitly add a `with g.as_default():` in that +thread's function. + +##### Returns: + + The default `Graph` being used in the current thread. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.gradients.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.gradients.md new file mode 100644 index 0000000000..ea710b2a15 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.gradients.md @@ -0,0 +1,48 @@ +### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients} + +Constructs symbolic partial derivatives of sum of `ys` w.r.t. x in `xs`. + +`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys` +is a list of `Tensor`, holding the gradients received by the +`ys`. 
The list must be the same length as `ys`.
+
+`gradients()` adds ops to the graph to output the partial
+derivatives of `ys` with respect to `xs`. It returns a list of
+`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
+for y in `ys`.
+
+`grad_ys` is a list of tensors of the same length as `ys` that holds
+the initial gradients for each y in `ys`. When `grad_ys` is None,
+we fill in a tensor of '1's of the shape of y for each y in `ys`. A
+user can provide their own initial `grad_ys` to compute the
+derivatives using a different initial gradient for each y (e.g., if
+one wanted to weight the gradient differently for each value in
+each y).
+
+##### Args:
+
+
+* `ys`: A `Tensor` or list of tensors to be differentiated.
+* `xs`: A `Tensor` or list of tensors to be used for differentiation.
+* `grad_ys`: Optional. A `Tensor` or list of tensors the same size as
+  `ys` and holding the gradients computed for each y in `ys`.
+* `name`: Optional name to use for grouping all the gradient ops together.
+  Defaults to 'gradients'.
+* `colocate_gradients_with_ops`: If True, try colocating gradients with
+  the corresponding op.
+* `gate_gradients`: If True, add a tuple around the gradients returned
+  for an operation. This avoids some race conditions.
+* `aggregation_method`: Specifies the method used to combine gradient terms.
+  Accepted values are constants defined in the class `AggregationMethod`.
+
+##### Returns:
+
+  A list of `sum(dy/dx)` for each x in `xs`.
+
+##### Raises:
+
+
+* `LookupError`: if one of the operations between `x` and `y` does not
+  have a registered gradient function.
+* `ValueError`: if the arguments are invalid.
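As a sanity check on the `sum(dy/dx)` contract above, a finite-difference sketch in plain Python approximates what `tf.gradients` constructs symbolically. `numeric_grad` is a hypothetical helper, not part of the TensorFlow API; it perturbs each `xs[i]` and differentiates the sum of all `ys`, which corresponds to the default `grad_ys` of ones.

```python
def numeric_grad(f, xs, eps=1e-6):
    # Hypothetical helper, not TensorFlow: central-difference estimate of
    # d(sum of all ys)/d(xs[i]) for each i, where ys = f(xs).
    grads = []
    for i in range(len(xs)):
        hi = list(xs)
        lo = list(xs)
        hi[i] += eps
        lo[i] -= eps
        grads.append((sum(f(hi)) - sum(f(lo))) / (2 * eps))
    return grads

# ys = [x0 * x1, x0 ** 2]  ->  d(sum ys)/dx0 = x1 + 2*x0,  d(sum ys)/dx1 = x0
grads = numeric_grad(lambda x: [x[0] * x[1], x[0] ** 2], [3.0, 4.0])
# grads is approximately [10.0, 3.0]
```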
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.histogram_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.histogram_summary.md new file mode 100644 index 0000000000..1ede11e820 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.histogram_summary.md @@ -0,0 +1,25 @@ +### `tf.histogram_summary(tag, values, collections=None, name=None)` {#histogram_summary} + +Outputs a `Summary` protocol buffer with a histogram. + +The generated +[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) +has one summary value containing a histogram for `values`. + +This op reports an `InvalidArgument` error if any value is not finite. + +##### Args: + + +* `tag`: A `string` `Tensor`. 0-D. Tag to use for the summary value. +* `values`: A real numeric `Tensor`. Any shape. Values to use to + build the histogram. +* `collections`: Optional list of graph collections keys. The new summary op is + added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. +* `name`: A name for the operation (optional). + +##### Returns: + + A scalar `Tensor` of type `string`. The serialized `Summary` protocol + buffer. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_contrast.md new file mode 100644 index 0000000000..2fbf1b3e2a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_contrast.md @@ -0,0 +1,29 @@ +### `tf.image.adjust_contrast(images, contrast_factor)` {#adjust_contrast} + +Adjust contrast of RGB or grayscale images. + +This is a convenience method that converts an RGB image to float +representation, adjusts its contrast, and then converts it back to the +original data type. If several adjustments are chained it is advisable to +minimize the number of redundant conversions. 
+ +`images` is a tensor of at least 3 dimensions. The last 3 dimensions are +interpreted as `[height, width, channels]`. The other dimensions only +represent a collection of images, such as `[batch, height, width, channels].` + +Contrast is adjusted independently for each channel of each image. + +For each channel, this Op computes the mean of the image pixels in the +channel and then adjusts each component `x` of each pixel to +`(x - mean) * contrast_factor + mean`. + +##### Args: + + +* `images`: Images to adjust. At least 3-D. +* `contrast_factor`: A float multiplier for adjusting contrast. + +##### Returns: + + The contrast-adjusted image or images. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_hue.md deleted file mode 100644 index e334e26184..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.adjust_hue.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.image.adjust_hue(image, delta, name=None)` {#adjust_hue} - -Adjust hue of an RGB image. - -This is a convenience method that converts an RGB image to float -representation, converts it to HSV, add an offset to the hue channel, converts -back to RGB and then back to the original data type. If several adjustments -are chained it is advisable to minimize the number of redundant conversions. - -`image` is an RGB image. The image hue is adjusted by converting the -image to HSV and rotating the hue channel (H) by -`delta`. The image is then converted back to RGB. - -`delta` must be in the interval `[-1, 1]`. - -##### Args: - - -* `image`: RGB image or images. Size of the last dimension must be 3. -* `delta`: float. How much to add to the hue channel. -* `name`: A name for this operation (optional). - -##### Returns: - - Adjusted image(s), same shape and DType as `image`. 
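The per-channel rule described for `tf.image.adjust_contrast` above, `(x - mean) * contrast_factor + mean`, can be sketched for a single channel in plain Python (illustrative only, not the TensorFlow kernel):

```python
def adjust_contrast_channel(channel, contrast_factor):
    # Illustrative sketch of the per-channel contrast rule:
    # each component x becomes (x - mean) * contrast_factor + mean.
    mean = sum(channel) / len(channel)
    return [(x - mean) * contrast_factor + mean for x in channel]

adjust_contrast_channel([0.0, 0.5, 1.0], 2.0)  # [-0.5, 0.5, 1.5]
```

Note that a factor greater than 1 spreads values away from the channel mean (and can push them outside the original range), while a factor below 1 compresses them toward it.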
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.central_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.central_crop.md deleted file mode 100644 index 4e6b6115f8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.central_crop.md +++ /dev/null @@ -1,30 +0,0 @@ -### `tf.image.central_crop(image, central_fraction)` {#central_crop} - -Crop the central region of the image. - -Remove the outer parts of an image but retain the central region of the image -along each dimension. If we specify central_fraction = 0.5, this function -returns the region marked with "X" in the below diagram. - - -------- - | | - | XXXX | - | XXXX | - | | where "X" is the central 50% of the image. - -------- - -##### Args: - - -* `image`: 3-D float Tensor of shape [height, width, depth] -* `central_fraction`: float (0, 1], fraction of size to crop - -##### Raises: - - -* `ValueError`: if central_crop_fraction is not within (0, 1]. - -##### Returns: - - 3-D float Tensor - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md new file mode 100644 index 0000000000..4724ff5eb9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md @@ -0,0 +1,30 @@ +### `tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#crop_to_bounding_box} + +Crops an image to a specified bounding box. + +This op cuts a rectangular part out of `image`. The top-left corner of the +returned image is at `offset_height, offset_width` in `image`, and its +lower-right corner is at +`offset_height + target_height, offset_width + target_width`. 
+ +##### Args: + + +* `image`: 3-D tensor with shape `[height, width, channels]` +* `offset_height`: Vertical coordinate of the top-left corner of the result in + the input. +* `offset_width`: Horizontal coordinate of the top-left corner of the result in + the input. +* `target_height`: Height of the result. +* `target_width`: Width of the result. + +##### Returns: + + 3-D tensor of image with shape `[target_height, target_width, channels]` + +##### Raises: + + +* `ValueError`: If the shape of `image` is incompatible with the `offset_*` or + `target_*` arguments + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.decode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.decode_jpeg.md deleted file mode 100644 index f4c6f1340a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.decode_jpeg.md +++ /dev/null @@ -1,41 +0,0 @@ -### `tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, name=None)` {#decode_jpeg} - -Decode a JPEG-encoded image to a uint8 tensor. - -The attr `channels` indicates the desired number of color channels for the -decoded image. - -Accepted values are: - -* 0: Use the number of channels in the JPEG-encoded image. -* 1: output a grayscale image. -* 3: output an RGB image. - -If needed, the JPEG-encoded image is transformed to match the requested number -of color channels. - -The attr `ratio` allows downscaling the image by an integer factor during -decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than -downscaling the image later. - -##### Args: - - -* `contents`: A `Tensor` of type `string`. 0-D. The JPEG-encoded image. -* `channels`: An optional `int`. Defaults to `0`. - Number of color channels for the decoded image. -* `ratio`: An optional `int`. Defaults to `1`. Downscaling ratio. -* `fancy_upscaling`: An optional `bool`. Defaults to `True`. 
- If true use a slower but nicer upscaling of the - chroma planes (yuv420/422 only). -* `try_recover_truncated`: An optional `bool`. Defaults to `False`. - If true try to recover an image from truncated input. -* `acceptable_fraction`: An optional `float`. Defaults to `1`. - The minimum required fraction of lines before a truncated - input is accepted. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.encode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.encode_png.md deleted file mode 100644 index fa073a771f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.encode_png.md +++ /dev/null @@ -1,28 +0,0 @@ -### `tf.image.encode_png(image, compression=None, name=None)` {#encode_png} - -PNG-encode an image. - -`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` -where `channels` is: - -* 1: for grayscale. -* 2: for grayscale + alpha. -* 3: for RGB. -* 4: for RGBA. - -The ZLIB compression level, `compression`, can be -1 for the PNG-encoder -default or a value from 0 to 9. 9 is the highest compression level, generating -the smallest output, but is slower. - -##### Args: - - -* `image`: A `Tensor`. Must be one of the following types: `uint8`, `uint16`. - 3-D with shape `[height, width, channels]`. -* `compression`: An optional `int`. Defaults to `-1`. Compression level. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `string`. 0-D. PNG-encoded image. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.hsv_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.hsv_to_rgb.md new file mode 100644 index 0000000000..3193dd9c60 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.hsv_to_rgb.md @@ -0,0 +1,21 @@ +### `tf.image.hsv_to_rgb(images, name=None)` {#hsv_to_rgb} + +Convert one or more images from HSV to RGB. + +Outputs a tensor of the same shape as the `images` tensor, containing the RGB +value of the pixels. The output is only well defined if the value in `images` +are in `[0,1]`. + +See `rgb_to_hsv` for a description of the HSV encoding. + +##### Args: + + +* `images`: A `Tensor` of type `float32`. + 1-D or higher rank. HSV data to convert. Last dimension must be size 3. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. `images` converted to RGB. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bicubic.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bicubic.md deleted file mode 100644 index 1805c7423d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bicubic.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.image.resize_bicubic(images, size, align_corners=None, name=None)` {#resize_bicubic} - -Resize `images` to `size` using bicubic interpolation. - -Input images can be of different types but output images are always float. - -##### Args: - - -* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. - 4-D with shape `[batch, height, width, channels]`. -* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The - new size for the images. -* `align_corners`: An optional `bool`. Defaults to `False`. 
- If true, rescale input by (new_height - 1) / (height - 1), which - exactly aligns the 4 corners of images and resized images. If false, rescale - by new_height / height. Treat similarly the width dimension. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. 4-D with shape - `[batch, new_height, new_width, channels]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md deleted file mode 100644 index a9580ca199..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.image.resize_bilinear(images, size, align_corners=None, name=None)` {#resize_bilinear} - -Resize `images` to `size` using bilinear interpolation. - -Input images can be of different types but output images are always float. - -##### Args: - - -* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. - 4-D with shape `[batch, height, width, channels]`. -* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The - new size for the images. -* `align_corners`: An optional `bool`. Defaults to `False`. - If true, rescale input by (new_height - 1) / (height - 1), which - exactly aligns the 4 corners of images and resized images. If false, rescale - by new_height / height. Treat similarly the width dimension. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. 4-D with shape - `[batch, new_height, new_width, channels]`. 
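The `align_corners` behavior described for the resize ops above amounts to a different mapping from output pixel indices to input coordinates: `(size - 1) / (new_size - 1)` versus `size / new_size`. A plain-Python sketch of that coordinate mapping (hypothetical helper, not the TensorFlow kernel, which also clamps coordinates to the valid range):

```python
def source_coords(in_size, out_size, align_corners):
    # Illustrative sketch of the index-to-coordinate mapping only.
    if align_corners and out_size > 1:
        scale = (in_size - 1) / (out_size - 1)  # endpoints map exactly
    else:
        scale = in_size / out_size              # legacy mapping
    # Real kernels clamp the resulting coordinates to [0, in_size - 1].
    return [i * scale for i in range(out_size)]

source_coords(4, 7, True)   # [0.0, 0.5, ..., 3.0]: corners line up exactly
source_coords(4, 7, False)  # last coordinate ~3.43, past the last input pixel
```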
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.sample_distorted_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.sample_distorted_bounding_box.md new file mode 100644 index 0000000000..2831492f54 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.sample_distorted_bounding_box.md @@ -0,0 +1,85 @@ +### `tf.image.sample_distorted_bounding_box(image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=None, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None)` {#sample_distorted_bounding_box} + +Generate a single randomly distorted bounding box for an image. + +Bounding box annotations are often supplied in addition to ground-truth labels +in image recognition or object localization tasks. A common technique for +training such a system is to randomly distort an image while preserving +its content, i.e. *data augmentation*. This Op outputs a randomly distorted +localization of an object, i.e. bounding box, given an `image_size`, +`bounding_boxes` and a series of constraints. + +The output of this Op is a single bounding box that may be used to crop the +original image. The output is returned as 3 tensors: `begin`, `size` and +`bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the +image. The latter may be supplied to `tf.image.draw_bounding_box` to visualize +what the bounding box looks like. + +Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The +bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and +height of the underlying image. + +For example, + + # Generate a single distorted bounding box. + begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( + tf.shape(image), + bounding_boxes=bounding_boxes) + + # Draw the bounding box in an image summary. 
+      image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
+                                                    bbox_for_draw)
+      tf.image_summary('images_with_box', image_with_box)
+
+      # Employ the bounding box to distort the image.
+      distorted_image = tf.slice(image, begin, size)
+
+Note that if no bounding box information is available, setting
+`use_image_if_no_bounding_boxes = true` will assume there is a single implicit
+bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is
+false and no bounding boxes are supplied, an error is raised.
+
+##### Args:
+
+
+* `image_size`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.
+  1-D, containing `[height, width, channels]`.
+* `bounding_boxes`: A `Tensor` of type `float32`.
+  3-D with shape `[batch, N, 4]` describing the N bounding boxes
+  associated with the image.
+* `seed`: An optional `int`. Defaults to `0`.
+  If either `seed` or `seed2` are set to non-zero, the random number
+  generator is seeded by the given `seed`. Otherwise, it is seeded by a random
+  seed.
+* `seed2`: An optional `int`. Defaults to `0`.
+  A second seed to avoid seed collision.
+* `min_object_covered`: An optional `float`. Defaults to `0.1`.
+  The cropped area of the image must contain at least this
+  fraction of any bounding box supplied.
+* `aspect_ratio_range`: An optional list of `floats`. Defaults to `[0.75, 1.33]`.
+  The cropped area of the image must have an aspect ratio =
+  width / height within this range.
+* `area_range`: An optional list of `floats`. Defaults to `[0.05, 1]`.
+  The cropped area of the image must contain a fraction of the
+  supplied image within this range.
+* `max_attempts`: An optional `int`. Defaults to `100`.
+  Number of attempts at generating a cropped region of the image
+  that satisfies the specified constraints. After `max_attempts` failures, return the entire
+  image.
+* `use_image_if_no_bounding_boxes`: An optional `bool`. Defaults to `False`.
+ Controls behavior if no bounding boxes supplied. + If true, assume an implicit bounding box covering the whole input. If false, + raise an error. +* `name`: A name for the operation (optional). + +##### Returns: + + A tuple of `Tensor` objects (begin, size, bboxes). + +* `begin`: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to + `tf.slice`. +* `size`: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to + `tf.slice`. +* `bboxes`: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box. + Provide as input to `tf.image.draw_bounding_boxes`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image_summary.md deleted file mode 100644 index 5df729544b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image_summary.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.image_summary(tag, tensor, max_images=3, collections=None, name=None)` {#image_summary} - -Outputs a `Summary` protocol buffer with images. - -The summary has up to `max_images` summary values containing images. The -images are built from `tensor` which must be 4-D with shape `[batch_size, -height, width, channels]` and where `channels` can be: - -* 1: `tensor` is interpreted as Grayscale. -* 3: `tensor` is interpreted as RGB. -* 4: `tensor` is interpreted as RGBA. - -The images have the same number of channels as the input tensor. For float -input, the values are normalized one image at a time to fit in the range -`[0, 255]`. `uint8` values are unchanged. The op uses two different -normalization algorithms: - -* If the input values are all positive, they are rescaled so the largest one - is 255. - -* If any input value is negative, the values are shifted so input value 0.0 - is at 127. 
They are then rescaled so that either the smallest value is 0, - or the largest one is 255. - -The `tag` argument is a scalar `Tensor` of type `string`. It is used to -build the `tag` of the summary values: - -* If `max_images` is 1, the summary value tag is '*tag*/image'. -* If `max_images` is greater than 1, the summary value tags are - generated sequentially as '*tag*/image/0', '*tag*/image/1', etc. - -##### Args: - - -* `tag`: A scalar `Tensor` of type `string`. Used to build the `tag` - of the summary values. -* `tensor`: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, - width, channels]` where `channels` is 1, 3, or 4. -* `max_images`: Max number of batch elements to generate images for. -* `collections`: Optional list of ops.GraphKeys. The collections to add the - summary to. Defaults to [ops.GraphKeys.SUMMARIES] -* `name`: A name for the operation (optional). - -##### Returns: - - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.initialize_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.initialize_variables.md new file mode 100644 index 0000000000..8941ab4853 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.initialize_variables.md @@ -0,0 +1,24 @@ +### `tf.initialize_variables(var_list, name='init')` {#initialize_variables} + +Returns an Op that initializes a list of variables. + +After you launch the graph in a session, you can run the returned Op to +initialize all the variables in `var_list`. This Op runs all the +initializers of the variables in `var_list` in parallel. + +Calling `initialize_variables()` is equivalent to passing the list of +initializers to `Group()`. + +If `var_list` is empty, however, the function still returns an Op that can +be run. That Op just has no effect. + +##### Args: + + +* `var_list`: List of `Variable` objects to initialize. 
+* `name`: Optional name for the returned operation.
+
+##### Returns:
+
+  An Op that runs the initializers of all the specified variables.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less.md
deleted file mode 100644
index 8791d0366a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.less(x, y, name=None)` {#less}
-
-Returns the truth value of (x < y) element-wise.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* `y`: A `Tensor`. Must have the same type as `x`.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md
new file mode 100644
index 0000000000..65d7eb5084
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md
@@ -0,0 +1,15 @@
+### `tf.less_equal(x, y, name=None)` {#less_equal}
+
+Returns the truth value of (x <= y) element-wise.
+
+##### Args:
+
+
+* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
+* `y`: A `Tensor`. Must have the same type as `x`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `bool`.
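As an illustrative pure-Python sketch (not the TensorFlow kernels), the element-wise semantics of `tf.less` and `tf.less_equal` over same-shaped inputs can be written as a pairwise predicate; `elementwise` is a hypothetical helper, not part of the API:

```python
def elementwise(fn, xs, ys):
    # Pair up elements of two same-length sequences and apply a binary
    # predicate, mirroring how tf.less / tf.less_equal compare x and y.
    return [fn(a, b) for a, b in zip(xs, ys)]

less = elementwise(lambda a, b: a < b, [1, 2, 3], [2, 2, 2])
less_equal = elementwise(lambda a, b: a <= b, [1, 2, 3], [2, 2, 2])
```

The real ops additionally support broadcasting between `x` and `y`, which this flat-list sketch omits.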
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.load_file_system_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.load_file_system_library.md deleted file mode 100644 index 60d768a624..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.load_file_system_library.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.load_file_system_library(library_filename)` {#load_file_system_library} - -Loads a TensorFlow plugin, containing file system implementation. - -Pass `library_filename` to a platform-specific mechanism for dynamically -loading a library. The rules for determining the exact location of the -library are platform-specific and are not documented here. - -##### Args: - - -* `library_filename`: Path to the plugin. - Relative or absolute filesystem path to a dynamic library file. - -##### Returns: - - None. - -##### Raises: - - -* `RuntimeError`: when unable to load the library. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_not.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_not.md new file mode 100644 index 0000000000..40a0bb2e43 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_not.md @@ -0,0 +1,14 @@ +### `tf.logical_not(x, name=None)` {#logical_not} + +Returns the truth value of NOT x element-wise. + +##### Args: + + +* `x`: A `Tensor` of type `bool`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. 
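A minimal pure-Python sketch of the element-wise boolean semantics, assuming flat lists of booleans (the `_list` helpers are hypothetical names, and the xor identity `(x | y) & ~(x & y)` is the standard boolean definition):

```python
def logical_not_list(xs):
    # Element-wise NOT over a flat list of booleans, as in tf.logical_not.
    return [not a for a in xs]

def logical_xor_list(xs, ys):
    # x ^ y expressed as (x or y) and not (x and y), element-wise.
    return [(a or b) and not (a and b) for a, b in zip(xs, ys)]

nots = logical_not_list([True, False])
xors = logical_xor_list([True, True, False, False],
                        [True, False, True, False])
```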
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_xor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_xor.md
new file mode 100644
index 0000000000..20db3e60a6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.logical_xor.md
@@ -0,0 +1,4 @@
+### `tf.logical_xor(x, y, name='LogicalXor')` {#logical_xor}
+
+x ^ y = (x | y) & ~(x & y).
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.make_template.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.make_template.md
new file mode 100644
index 0000000000..bb0cff57cd
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.make_template.md
@@ -0,0 +1,105 @@
+### `tf.make_template(name_, func_, create_scope_now_=False, **kwargs)` {#make_template}
+
+Given an arbitrary function, wrap it so that it does variable sharing.
+
+This wraps `func_` in a Template and partially evaluates it. Templates are
+functions that create variables the first time they are called and reuse them
+thereafter. In order for `func_` to be compatible with a `Template` it must
+have the following properties:
+
+* The function should create all trainable variables and any variables that
+  should be reused by calling `tf.get_variable`. If a trainable variable is
+  created using `tf.Variable`, then a ValueError will be thrown. Variables
+  that are intended to be locals can be created by specifying
+  `tf.Variable(..., trainable=False)`.
+* The function may use variable scopes and other templates internally to
+  create and reuse variables, but it shouldn't use `tf.get_variable` to
+  capture variables that are defined outside of the scope of the function.
+* Internal scopes and variable names should not depend on any arguments that
+  are not supplied to `make_template`.
In general, if you make a mistake you will get a ValueError
+  telling you that you are trying to reuse a variable that doesn't exist.
+
+In the following example, both `z` and `w` will be scaled by the same `y`. Note
+that if we had not passed `scalar_name` but had instead used different names
+for `z` and `w`, a `ValueError` would be thrown because the variable could not
+be reused.
+
+```python
+def my_op(x, scalar_name):
+  var1 = tf.get_variable(scalar_name,
+                         shape=[],
+                         initializer=tf.constant_initializer(1))
+  return x * var1
+
+scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
+
+z = scale_by_y(input1)
+w = scale_by_y(input2)
+```
+
+As a safeguard, the returned function will raise a `ValueError` after the
+first call if trainable variables are created by calling `tf.Variable`.
+
+If all of these are true, then two properties are enforced by the template:
+
+1. Calling the same template multiple times will share all non-local
+   variables.
+2. Two different templates are guaranteed to be unique, unless you reenter the
+   same variable scope as the initial definition of a template and redefine
+   it. An example of this exception:
+
+```python
+def my_op(x, scalar_name):
+  var1 = tf.get_variable(scalar_name,
+                         shape=[],
+                         initializer=tf.constant_initializer(1))
+  return x * var1
+
+with tf.variable_scope('scope') as vs:
+  scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
+  z = scale_by_y(input1)
+  w = scale_by_y(input2)
+
+# Creates a template that reuses the variables above.
+with tf.variable_scope(vs, reuse=True):
+  scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
+  z2 = scale_by_y2(input1)
+  w2 = scale_by_y2(input2)
+```
+
+Depending on the value of `create_scope_now_`, the full variable scope may be
+captured either at the time of first call or at the time of construction.
If
+this option is set to True, then all Tensors created by repeated calls to the
+template will have an extra trailing _N+1 to their name, as the first time the
+scope is entered in the Template constructor no Tensors are created.
+
+Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to
+reduce the likelihood of collisions with kwargs.
+
+##### Args:
+
+
+* `name_`: A name for the scope created by this template. If necessary, the name
+  will be made unique by appending `_N` to the name.
+* `func_`: The function to wrap.
+* `create_scope_now_`: Boolean controlling whether the scope should be created
+  when the template is constructed or when the template is called. Default
+  is False, meaning the scope is created when the template is called.
+* `**kwargs`: Keyword arguments to apply to `func_`.
+
+##### Returns:
+
+  A function to encapsulate a set of variables which should be created once
+  and reused. An enclosing scope will be created, either where `make_template`
+  is called, or wherever the result is called, depending on the value of
+  `create_scope_now_`. Regardless of the value, the first time the template
+  is called it will enter the scope with no reuse, and call `func_` to create
+  variables, which are guaranteed to be unique. All subsequent calls will
+  re-enter the scope and reuse those variables.
+
+##### Raises:
+
+
+* `ValueError`: if the name is None.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.minimum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.minimum.md
deleted file mode 100644
index bff13483f4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.minimum.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.minimum(x, y, name=None)` {#minimum}
-
-Returns the min of x and y (i.e. x < y ? x : y) element-wise, broadcasts.
-
-##### Args:
-
-
-* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
-* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.multinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.multinomial.md new file mode 100644 index 0000000000..b5bf7a30a5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.multinomial.md @@ -0,0 +1,28 @@ +### `tf.multinomial(logits, num_samples, seed=None, name=None)` {#multinomial} + +Draws samples from a multinomial distribution. + +Example: + + samples = tf.multinomial(tf.log([[0.5, 0.5]]), 10) + # samples has shape [1, 10], where each value is either 0 or 1. + + samples = tf.multinomial([[1, -1, -1]], 10) + # samples is equivalent to tf.zeros([1, 10], dtype=tf.int64). + +##### Args: + + +* `logits`: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice + `[i, :]` represents the unnormalized log probabilities for all classes. +* `num_samples`: 0-D. Number of independent samples to draw for each row slice. +* `seed`: A Python integer. Used to create a random seed for the distribution. + See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: Optional name for the operation. + +##### Returns: + + The drawn samples of shape `[batch_size, num_samples]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.name_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.name_scope.md new file mode 100644 index 0000000000..a003f2327f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.name_scope.md @@ -0,0 +1,18 @@ +### `tf.name_scope(name)` {#name_scope} + +Wrapper for `Graph.name_scope()` using the default graph. + +See +[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope) +for more details. 
+ +##### Args: + + +* `name`: A name for the scope. + +##### Returns: + + A context manager that installs `name` as a new name scope in the + default graph. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.avg_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.avg_pool.md new file mode 100644 index 0000000000..33da8534c2 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.avg_pool.md @@ -0,0 +1,25 @@ +### `tf.nn.avg_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#avg_pool} + +Performs the average pooling on the input. + +Each entry in `output` is the mean of the corresponding size `ksize` +window in `value`. + +##### Args: + + +* `value`: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type + `float32`, `float64`, `qint8`, `quint8`, or `qint32`. +* `ksize`: A list of ints that has length >= 4. + The size of the window for each dimension of the input tensor. +* `strides`: A list of ints that has length >= 4. + The stride of the sliding window for each dimension of the + input tensor. +* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. +* `data_format`: A string. 'NHWC' and 'NCHW' are supported. +* `name`: Optional name for the operation. + +##### Returns: + + A `Tensor` with the same type as `value`. The average pooled output tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bias_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bias_add.md new file mode 100644 index 0000000000..1eea161f23 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bias_add.md @@ -0,0 +1,24 @@ +### `tf.nn.bias_add(value, bias, data_format=None, name=None)` {#bias_add} + +Adds `bias` to `value`. + +This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D. 
+Broadcasting is supported, so `value` may have any number of dimensions. +Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the +case where both types are quantized. + +##### Args: + + +* `value`: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`, + `int16`, `int8`, or `complex64`. +* `bias`: A 1-D `Tensor` with size matching the last dimension of `value`. + Must be the same type as `value` unless `value` is a quantized type, + in which case a different quantized type may be used. +* `data_format`: A string. 'NHWC' and 'NCHW' are supported. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` with the same type as `value`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.compute_accidental_hits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.compute_accidental_hits.md deleted file mode 100644 index 9d5bb30303..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.compute_accidental_hits.md +++ /dev/null @@ -1,45 +0,0 @@ -### `tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)` {#compute_accidental_hits} - -Compute the position ids in `sampled_candidates` matching `true_classes`. - -In Candidate Sampling, this operation facilitates virtually removing -sampled classes which happen to match target classes. This is done -in Sampled Softmax and Sampled Logistic. - -See our [Candidate Sampling Algorithms -Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf). - -We presuppose that the `sampled_candidates` are unique. - -We call it an 'accidental hit' when one of the target classes -matches one of the sampled classes. This operation reports -accidental hits as triples `(index, id, weight)`, where `index` -represents the row number in `true_classes`, `id` represents the -position in `sampled_candidates`, and weight is `-FLOAT_MAX`. 
- -The result of this op should be passed through a `sparse_to_dense` -operation, then added to the logits of the sampled classes. This -removes the contradictory effect of accidentally sampling the true -target classes as noise classes for the same example. - -##### Args: - - -* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. The target classes. -* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. - The sampled_candidates output of CandidateSampler. -* `num_true`: An `int`. The number of target classes per training example. -* `seed`: An `int`. An operation-specific seed. Default is 0. -* `name`: A name for the operation (optional). - -##### Returns: - - -* `indices`: A `Tensor` of type `int32` and shape `[num_accidental_hits]`. - Values indicate rows in `true_classes`. -* `ids`: A `Tensor` of type `int64` and shape `[num_accidental_hits]`. - Values indicate positions in `sampled_candidates`. -* `weights`: A `Tensor` of type `float` and shape `[num_accidental_hits]`. - Each value is `-FLOAT_MAX`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.depthwise_conv2d_native.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.depthwise_conv2d_native.md new file mode 100644 index 0000000000..c2736f1ba9 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.depthwise_conv2d_native.md @@ -0,0 +1,37 @@ +### `tf.nn.depthwise_conv2d_native(input, filter, strides, padding, name=None)` {#depthwise_conv2d_native} + +Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. 
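The depthwise rule (each input channel is convolved with its own depth-1 filters, and the results are concatenated along the channel axis) can be sketched in pure Python. This is an illustrative reference for tiny nested-list tensors with VALID padding and a single batch element, not the actual kernel; `depthwise_conv2d_ref` is a hypothetical helper:

```python
def depthwise_conv2d_ref(inp, filt, stride=1):
    # inp: [H][W][C] nested lists (one batch element); filt: [fh][fw][C][M].
    # Filter slice (k, q) is applied to input channel k only, writing
    # output channel k * M + q, with VALID padding.
    H, W, C = len(inp), len(inp[0]), len(inp[0][0])
    fh, fw = len(filt), len(filt[0])
    M = len(filt[0][0][0])  # channel_multiplier
    oh = (H - fh) // stride + 1
    ow = (W - fw) // stride + 1
    out = [[[0.0] * (C * M) for _ in range(ow)] for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            for k in range(C):
                for q in range(M):
                    out[i][j][k * M + q] = sum(
                        inp[stride * i + di][stride * j + dj][k] *
                        filt[di][dj][k][q]
                        for di in range(fh) for dj in range(fw))
    return out

# A 1x1 filter with weight 2.0 on a single channel simply doubles the input.
doubled = depthwise_conv2d_ref([[[1.0], [2.0]], [[3.0], [4.0]]], [[[[2.0]]]])
```

With `channel_multiplier > 1`, each input channel fans out to several output channels, matching the `in_channels * channel_multiplier` output depth described here.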
+
+Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
+and a filter / kernel tensor of shape
+`[filter_height, filter_width, in_channels, channel_multiplier]`, containing
+`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies
+a different filter to each input channel (expanding from 1 channel to
+`channel_multiplier` channels for each), then concatenates the results
+together. Thus, the output has `in_channels * channel_multiplier` channels.
+
+for k in 0..in_channels-1
+  for q in 0..channel_multiplier-1
+    output[b, i, j, k * channel_multiplier + q] =
+      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
+                   filter[di, dj, k, q]
+
+Must have `strides[0] = strides[3] = 1`. For the most common case of the same
+horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
+
+##### Args:
+
+
+* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`.
+* `filter`: A `Tensor`. Must have the same type as `input`.
+* `strides`: A list of `ints`.
+    1-D of length 4. The stride of the sliding window for each dimension
+    of `input`.
+* `padding`: A `string` from: `"SAME", "VALID"`.
+    The type of padding algorithm to use.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `input`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.elu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.elu.md
new file mode 100644
index 0000000000..cef8dedb50
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.elu.md
@@ -0,0 +1,17 @@
+### `tf.nn.elu(features, name=None)` {#elu}
+
+Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise.
+
+See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
+](http://arxiv.org/abs/1511.07289)
+
+##### Args:
+
+
+* `features`: A `Tensor`.
Must be one of the following types: `float32`, `float64`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `features`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.fixed_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.fixed_unigram_candidate_sampler.md deleted file mode 100644 index ad9b059e42..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.fixed_unigram_candidate_sampler.md +++ /dev/null @@ -1,75 +0,0 @@ -### `tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None)` {#fixed_unigram_candidate_sampler} - -Samples a set of classes using the provided (fixed) base distribution. - -This operation randomly samples a tensor of sampled classes -(`sampled_candidates`) from the range of integers `[0, range_max)`. - -The elements of `sampled_candidates` are drawn without replacement -(if `unique=True`) or with replacement (if `unique=False`) from -the base distribution. - -The base distribution is read from a file or passed in as an -in-memory array. There is also an option to skew the distribution by -applying a distortion power to the weights. - -In addition, this operation returns tensors `true_expected_count` -and `sampled_expected_count` representing the number of times each -of the target classes (`true_classes`) and the sampled -classes (`sampled_candidates`) is expected to occur in an average -tensor of sampled classes. These values correspond to `Q(y|x)` -defined in [this -document](http://www.tensorflow.org/extras/candidate_sampling.pdf). -If `unique=True`, then these are post-rejection probabilities and we -compute them approximately. - -##### Args: - - -* `true_classes`: A `Tensor` of type `int64` and shape `[batch_size, - num_true]`. 
The target classes. -* `num_true`: An `int`. The number of target classes per training example. -* `num_sampled`: An `int`. The number of classes to randomly sample per batch. -* `unique`: A `bool`. Determines whether all sampled classes in a batch are - unique. -* `range_max`: An `int`. The number of possible classes. -* `vocab_file`: Each valid line in this file (which should have a CSV-like - format) corresponds to a valid word ID. IDs are in sequential order, - starting from num_reserved_ids. The last entry in each line is expected - to be a value corresponding to the count or relative probability. Exactly - one of `vocab_file` and `unigrams` needs to be passed to this operation. -* `distortion`: The distortion is used to skew the unigram probability - distribution. Each weight is first raised to the distortion's power - before adding to the internal unigram distribution. As a result, - `distortion = 1.0` gives regular unigram sampling (as defined by the vocab - file), and `distortion = 0.0` gives a uniform distribution. -* `num_reserved_ids`: Optionally some reserved IDs can be added in the range - `[0, num_reserved_ids]` by the users. One use case is that a special - unknown word token is used as ID 0. These IDs will have a sampling - probability of 0. -* `num_shards`: A sampler can be used to sample from a subset of the original - range in order to speed up the whole computation through parallelism. This - parameter (together with `shard`) indicates the number of partitions that - are being used in the overall computation. -* `shard`: A sampler can be used to sample from a subset of the original range - in order to speed up the whole computation through parallelism. This - parameter (together with `num_shards`) indicates the particular partition - number of the operation, when partitioning is being used. -* `unigrams`: A list of unigram counts or probabilities, one per ID in - sequential order. 
Exactly one of `vocab_file` and `unigrams` should be - passed to this operation. -* `seed`: An `int`. An operation-specific seed. Default is 0. -* `name`: A name for the operation (optional). - -##### Returns: - - -* `sampled_candidates`: A tensor of type `int64` and shape `[num_sampled]`. - The sampled classes. -* `true_expected_count`: A tensor of type `float`. Same shape as - `true_classes`. The expected counts under the sampling distribution - of each of `true_classes`. -* `sampled_expected_count`: A tensor of type `float`. Same shape as - `sampled_candidates`. The expected counts under the sampling distribution - of each of `sampled_candidates`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.local_response_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.local_response_normalization.md new file mode 100644 index 0000000000..349e34fa73 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.local_response_normalization.md @@ -0,0 +1,34 @@ +### `tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)` {#local_response_normalization} + +Local Response Normalization. + +The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last +dimension), and each vector is normalized independently. Within a given vector, +each component is divided by the weighted, squared sum of inputs within +`depth_radius`. In detail, + + sqr_sum[a, b, c, d] = + sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) + output = input / (bias + alpha * sqr_sum) ** beta + +For details, see [Krizhevsky et al., ImageNet classification with deep +convolutional neural networks (NIPS 2012)] +(http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks). + +##### Args: + + +* `input`: A `Tensor` of type `float32`. 4-D. +* `depth_radius`: An optional `int`. Defaults to `5`. + 0-D. 
Half-width of the 1-D normalization window. +* `bias`: An optional `float`. Defaults to `1`. + An offset (usually positive to avoid dividing by 0). +* `alpha`: An optional `float`. Defaults to `1`. + A scale factor, usually positive. +* `beta`: An optional `float`. Defaults to `0.5`. An exponent. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `float32`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.separable_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.separable_conv2d.md deleted file mode 100644 index f4be03303f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.separable_conv2d.md +++ /dev/null @@ -1,40 +0,0 @@ -### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None)` {#separable_conv2d} - -2-D convolution with separable filters. - -Performs a depthwise convolution that acts separately on channels followed by -a pointwise convolution that mixes channels. Note that this is separability -between dimensions `[1, 2]` and `3`, not spatial separability between -dimensions `1` and `2`. - -In detail, - - output[b, i, j, k] = sum_{di, dj, q, r] - input[b, strides[1] * i + di, strides[2] * j + dj, q] * - depthwise_filter[di, dj, q, r] * - pointwise_filter[0, 0, q * channel_multiplier + r, k] - -`strides` controls the strides for the depthwise convolution only, since -the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have -`strides[0] = strides[3] = 1`. For the most common case of the same -horizontal and vertical strides, `strides = [1, stride, stride, 1]`. - -##### Args: - - -* `input`: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`. -* `depthwise_filter`: 4-D `Tensor` with shape - `[filter_height, filter_width, in_channels, channel_multiplier]`. - Contains `in_channels` convolutional filters of depth 1. 
-* `pointwise_filter`: 4-D `Tensor` with shape - `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise - filter to mix channels after `depthwise_filter` has convolved spatially. -* `strides`: 1-D of size 4. The strides for the depthwise convolution for - each dimension of `input`. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `name`: A name for this operation (optional). - -##### Returns: - - A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sigmoid_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sigmoid_cross_entropy_with_logits.md deleted file mode 100644 index c449554fb8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sigmoid_cross_entropy_with_logits.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)` {#sigmoid_cross_entropy_with_logits} - -Computes sigmoid cross entropy given `logits`. - -Measures the probability error in discrete classification tasks in which each -class is independent and not mutually exclusive. For instance, one could -perform multilabel classification where a picture can contain both an elephant -and a dog at the same time. - -For brevity, let `x = logits`, `z = targets`. 
The logistic loss is - - z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x)) - = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x))) - = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x))) - = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)) - = (1 - z) * x + log(1 + exp(-x)) - = x - x * z + log(1 + exp(-x)) - -For x < 0, to avoid overflow in exp(-x), we reformulate the above - - x - x * z + log(1 + exp(-x)) - = log(exp(x)) - x * z + log(1 + exp(-x)) - = - x * z + log(1 + exp(x)) - -Hence, to ensure stability and avoid overflow, the implementation uses this -equivalent formulation - - max(x, 0) - x * z + log(1 + exp(-abs(x))) - -`logits` and `targets` must have the same type and shape. - -##### Args: - - -* `logits`: A `Tensor` of type `float32` or `float64`. -* `targets`: A `Tensor` of the same type and shape as `logits`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of the same shape as `logits` with the componentwise - logistic losses. - -##### Raises: - - -* `ValueError`: If `logits` and `targets` do not have the same shape. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.softsign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.softsign.md new file mode 100644 index 0000000000..971b2a8134 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.softsign.md @@ -0,0 +1,14 @@ +### `tf.nn.softsign(features, name=None)` {#softsign} + +Computes softsign: `features / (abs(features) + 1)`. + +##### Args: + + +* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `features`. 
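The numerically stable reformulation given in the sigmoid cross-entropy doc above, `max(x, 0) - x * z + log(1 + exp(-abs(x)))`, can be checked against the direct logistic loss in pure Python. This is a scalar sketch for illustration only (the hypothetical `_naive`/`_stable` names are not TF APIs):

```python
import math

def sigmoid_xent_naive(x, z):
    # Direct logistic loss; exp(-x) overflows for large negative x.
    return (z * -math.log(1.0 / (1.0 + math.exp(-x))) +
            (1.0 - z) * -math.log(math.exp(-x) / (1.0 + math.exp(-x))))

def sigmoid_xent_stable(x, z):
    # Equivalent form max(x, 0) - x * z + log(1 + exp(-|x|)),
    # finite for any x because exp is only ever applied to -|x| <= 0.
    return max(x, 0.0) - x * z + math.log(1.0 + math.exp(-abs(x)))
```

For moderate `x` the two agree to machine precision, while the stable form also handles extreme logits (for example `x = 1000.0`) where the naive form would overflow or take `log(0)`.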
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sparse_softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sparse_softmax_cross_entropy_with_logits.md deleted file mode 100644 index 6d53d84c5b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sparse_softmax_cross_entropy_with_logits.md +++ /dev/null @@ -1,38 +0,0 @@ -### `tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name=None)` {#sparse_softmax_cross_entropy_with_logits} - -Computes sparse softmax cross entropy between `logits` and `labels`. - -Measures the probability error in discrete classification tasks in which the -classes are mutually exclusive (each entry is in exactly one class). For -example, each CIFAR-10 image is labeled with one and only one label: an image -can be a dog or a truck, but not both. - -**NOTE:** For this operation, the probability of a given label is considered -exclusive. That is, soft classes are not allowed, and the `labels` vector -must provide a single specific index for the true class for each row of -`logits` (each minibatch entry). For soft softmax classification with -a probability distribution for each entry, see -`softmax_cross_entropy_with_logits`. - -**WARNING:** This op expects unscaled logits, since it performs a softmax -on `logits` internally for efficiency. Do not call this op with the -output of `softmax`, as it will produce incorrect results. - -`logits` must have the shape `[batch_size, num_classes]` -and dtype `float32` or `float64`. - -`labels` must have the shape `[batch_size]` and dtype `int32` or `int64`. - -##### Args: - - -* `logits`: Unscaled log probabilities. -* `labels`: Each entry `labels[i]` must be an index in `[0, num_classes)`. Other - values will result in a loss of 0, but incorrect gradient computations. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the - softmax cross entropy loss. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md new file mode 100644 index 0000000000..cb55675641 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md @@ -0,0 +1,4 @@ +### `tf.no_regularizer(_)` {#no_regularizer} + +Use this function to prevent regularization of variables. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.one_hot.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.one_hot.md deleted file mode 100644 index eebb6ab643..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.one_hot.md +++ /dev/null @@ -1,129 +0,0 @@ -### `tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)` {#one_hot} - -Returns a one-hot tensor. - -The locations represented by indices in `indices` take value `on_value`, -while all other locations take value `off_value`. - -`on_value` and `off_value` must have matching data types. If `dtype` is also -provided, they must be the same data type as specified by `dtype`. - -If `on_value` is not provided, it will default to the value `1` with type -`dtype` - -If `off_value` is not provided, it will default to the value `0` with type -`dtype` - -If the input `indices` is rank `N`, the output will have rank `N+1`. The -new axis is created at dimension `axis` (default: the new axis is appended -at the end). 
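The rank-1, `axis=-1` case described above can be sketched in pure Python with plain lists (`one_hot_1d` is a hypothetical helper, not the TF op):

```python
def one_hot_1d(indices, depth, on_value=1.0, off_value=0.0):
    # Row i is on_value at position indices[i] and off_value elsewhere;
    # an out-of-range index (e.g. -1) yields an all-off row.
    return [[on_value if j == i else off_value for j in range(depth)]
            for i in indices]

rows = one_hot_1d([0, 2, -1, 1], depth=3, on_value=5.0)
```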
- -If `indices` is a scalar the output shape will be a vector of length `depth` - -If `indices` is a vector of length `features`, the output shape will be: -``` - features x depth if axis == -1 - depth x features if axis == 0 -``` - -If `indices` is a matrix (batch) with shape `[batch, features]`, the output -shape will be: -``` - batch x features x depth if axis == -1 - batch x depth x features if axis == 1 - depth x batch x features if axis == 0 -``` - -If `dtype` is not provided, it will attempt to assume the data type of -`on_value` or `off_value`, if one or both are passed in. If none of -`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the -value `tf.float32` - -Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.), -both `on_value` and `off_value` _must_ be provided to `one_hot` - -Examples -========= - -Suppose that - -``` - indices = [0, 2, -1, 1] - depth = 3 - on_value = 5.0 - off_value = 0.0 - axis = -1 -``` - -Then output is `[4 x 3]`: - -``` - output = - [5.0 0.0 0.0] // one_hot(0) - [0.0 0.0 5.0] // one_hot(2) - [0.0 0.0 0.0] // one_hot(-1) - [0.0 5.0 0.0] // one_hot(1) -``` - -Suppose that - -``` - indices = [[0, 2], [1, -1]] - depth = 3 - on_value = 1.0 - off_value = 0.0 - axis = -1 -``` - -Then output is `[2 x 2 x 3]`: - -``` - output = - [ - [1.0, 0.0, 0.0] // one_hot(0) - [0.0, 0.0, 1.0] // one_hot(2) - ][ - [0.0, 1.0, 0.0] // one_hot(1) - [0.0, 0.0, 0.0] // one_hot(-1) - ] -``` - -Using default values for `on_value` and `off_value`: - -``` - indices = [0, 1, 2] - depth = 3 -``` - -The output will be - -``` - output = - [[1., 0., 0.], - [0., 1., 0.], - [0., 0., 1.]] -``` - -##### Args: - - -* `indices`: A `Tensor` of indices. -* `depth`: A scalar defining the depth of the one hot dimension. -* `on_value`: A scalar defining the value to fill in output when `indices[j] - = i`. (default: 1) -* `off_value`: A scalar defining the value to fill in output when `indices[j] - != i`. 
(default: 0) -* `axis`: The axis to fill (default: -1, a new inner-most axis). -* `dtype`: The data type of the output tensor. - -##### Returns: - - -* `output`: The one-hot tensor. - -##### Raises: - - -* `TypeError`: If dtype of either `on_value` or `off_value` don't match `dtype` -* `TypeError`: If dtype of `on_value` and `off_value` don't match one another - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md new file mode 100644 index 0000000000..0ddbc8b801 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md @@ -0,0 +1,4 @@ +### `tf.ones_initializer(shape, dtype=tf.float32)` {#ones_initializer} + +An adaptor for ones() to match the Initializer spec. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.op_scope.md new file mode 100644 index 0000000000..c1002fd125 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.op_scope.md @@ -0,0 +1,36 @@ +### `tf.op_scope(values, name, default_name=None)` {#op_scope} + +Returns a context manager for use when defining a Python op. + +This context manager validates that the given `values` are from the +same graph, ensures that graph is the default graph, and pushes a +name scope. + +For example, to define a new Python op called `my_op`: + +```python +def my_op(a, b, c, name=None): + with tf.op_scope([a, b, c], name, "MyOp") as scope: + a = tf.convert_to_tensor(a, name="a") + b = tf.convert_to_tensor(b, name="b") + c = tf.convert_to_tensor(c, name="c") + # Define some computation that uses `a`, `b`, and `c`. + return foo_op(..., name=scope) +``` + +##### Args: + + +* `values`: The list of `Tensor` arguments that are passed to the op function. +* `name`: The name argument that is passed to the op function. 
+* `default_name`: The default name to use if the `name` argument is `None`. + +##### Returns: + + A context manager for use in defining Python ops. Yields the name scope. + +##### Raises: + + +* `ValueError`: if neither `name` nor `default_name` is provided. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.py_func.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.py_func.md deleted file mode 100644 index c115d21781..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.py_func.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.py_func(func, inp, Tout, name=None)` {#py_func} - -Wraps a python function and uses it as a tensorflow op. - -Given a python function `func`, which takes numpy arrays as its -inputs and returns numpy arrays as its outputs. E.g., - -```python -def my_func(x): - # x will be a numpy array with the contents of the placeholder below - return np.sinh(x) -inp = tf.placeholder(tf.float32, [...]) -y = py_func(my_func, [inp], [tf.float32]) -``` - -The above snippet constructs a tf graph which invokes a numpy -sinh(x) as an op in the graph. - -##### Args: - - -* `func`: A python function. -* `inp`: A list of `Tensor`. -* `Tout`: A list of tensorflow data types indicating what `func` - returns. -* `name`: A name for the operation (optional). - -##### Returns: - - A list of `Tensor` which `func` computes. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal.md new file mode 100644 index 0000000000..1344423202 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal.md @@ -0,0 +1,23 @@ +### `tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#random_normal} + +Outputs random values from a normal distribution. + +##### Args: + + +* `shape`: A 1-D integer Tensor or Python array. 
The shape of the output tensor. +* `mean`: A 0-D Tensor or Python value of type `dtype`. The mean of the normal + distribution. +* `stddev`: A 0-D Tensor or Python value of type `dtype`. The standard deviation + of the normal distribution. +* `dtype`: The type of the output. +* `seed`: A Python integer. Used to create a random seed for the distribution. + See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `name`: A name for the operation (optional). + +##### Returns: + + A tensor of the specified shape filled with random normal values. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal_initializer.md deleted file mode 100644 index 9f229e3b1c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.random_normal_initializer.md +++ /dev/null @@ -1,25 +0,0 @@ -### `tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#random_normal_initializer} - -Returns an initializer that generates tensors with a normal distribution. - -##### Args: - - -* `mean`: a python scalar or a scalar tensor. Mean of the random values - to generate. -* `stddev`: a python scalar or a scalar tensor. Standard deviation of the - random values to generate. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. - -##### Returns: - - An initializer that generates tensors with a normal distribution. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. 
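The initializer contract described above — a callable that, given a `shape` and `dtype`, returns a normally distributed array — can be sketched in plain NumPy for readers without TensorFlow installed. The function names here are illustrative stand-ins, not part of the TF API, and seeding per call is a simplification of TF's graph-level seeding:

```python
import numpy as np

def random_normal_initializer(mean=0.0, stddev=1.0, seed=None):
    """Return an init function that fills a given shape with N(mean, stddev) draws."""
    def _init(shape, dtype=np.float32):
        # A fixed seed makes every call reproduce the same values (a
        # simplification relative to TF's op-level/graph-level seeding).
        rng = np.random.RandomState(seed)
        return rng.normal(loc=mean, scale=stddev, size=shape).astype(dtype)
    return _init

init = random_normal_initializer(mean=0.0, stddev=0.1, seed=42)
w = init((3, 4))
```

Passing the returned callable wherever an initializer is expected mirrors how `tf.get_variable(..., initializer=...)` consumes it.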
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_all.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_all.md new file mode 100644 index 0000000000..3137d5c49e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_all.md @@ -0,0 +1,35 @@ +### `tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_all} + +Computes the "logical and" of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. + +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +For example: + +```python +# 'x' is [[True, True] +# [False, False]] +tf.reduce_all(x) ==> False +tf.reduce_all(x, 0) ==> [False, False] +tf.reduce_all(x, 1) ==> [True, False] +``` + +##### Args: + + +* `input_tensor`: The boolean tensor to reduce. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_mean.md deleted file mode 100644 index af446b6c53..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_mean.md +++ /dev/null @@ -1,35 +0,0 @@ -### `tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_mean} - -Computes the mean of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. 
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -For example: - -```python -# 'x' is [[1., 1.] -# [2., 2.]] -tf.reduce_mean(x) ==> 1.5 -tf.reduce_mean(x, 0) ==> [1.5, 1.5] -tf.reduce_mean(x, 1) ==> [1., 2.] -``` - -##### Args: - - -* `input_tensor`: The tensor to reduce. Should have numeric type. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. -* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.register_tensor_conversion_function.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.register_tensor_conversion_function.md new file mode 100644 index 0000000000..e15dcf7b40 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.register_tensor_conversion_function.md @@ -0,0 +1,42 @@ +### `tf.register_tensor_conversion_function(base_type, conversion_func, priority=100)` {#register_tensor_conversion_function} + +Registers a function for converting objects of `base_type` to `Tensor`. + +The conversion function must have the following signature: + + def conversion_func(value, dtype=None, name=None, as_ref=False): + # ... + +It must return a `Tensor` with the given `dtype` if specified. If the +conversion function creates a new `Tensor`, it should use the given +`name` if specified. All exceptions will be propagated to the caller. + +The conversion function may return `NotImplemented` for some +inputs. In this case, the conversion process will continue to try +subsequent conversion functions. 
+ +If `as_ref` is true, the function must return a `Tensor` reference, +such as a `Variable`. + +NOTE: The conversion functions will execute in order of priority, +followed by order of registration. To ensure that a conversion function +`F` runs before another conversion function `G`, ensure that `F` is +registered with a smaller priority than `G`. + +##### Args: + + +* `base_type`: The base type or tuple of base types for all objects that + `conversion_func` accepts. +* `conversion_func`: A function that converts instances of `base_type` to + `Tensor`. +* `priority`: Optional integer that indicates the priority for applying this + conversion function. Conversion functions with smaller priority values + run earlier than conversion functions with larger priority values. + Defaults to 100. + +##### Raises: + + +* `TypeError`: If the arguments do not have the appropriate type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md deleted file mode 100644 index 5e8b1bc917..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.rsqrt(x, name=None)` {#rsqrt} - -Computes reciprocal of square root of x element-wise. - -I.e., \\(y = 1 / \sqrt{x}\\). - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
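The identity above, \\(y = 1 / \sqrt{x}\\), is easy to check element-wise with NumPy; this is a stand-in sketch of the math, not the TF kernel:

```python
import numpy as np

def rsqrt(x):
    """Element-wise reciprocal square root: y = 1 / sqrt(x)."""
    x = np.asarray(x, dtype=np.float64)
    return 1.0 / np.sqrt(x)

y = rsqrt([1.0, 4.0, 0.25])
# y == [1.0, 0.5, 2.0]
```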
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scan.md deleted file mode 100644 index 6ea0ac677b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scan.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#scan} - -scan on the list of tensors unpacked from `elems` on dimension 0. - -This scan operator repeatedly applies the callable `fn` to a sequence -of elements from first to last. The elements are made of the tensors -unpacked from `elems` on dimension 0. The callable fn takes two tensors as -arguments. The first argument is the accumulated value computed from the -preceding invocation of fn. If `initializer` is None, `elems` must contain -at least one element, and its first element is used as the initializer. - -Suppose that `elems` is unpacked into `values`, a list of tensors. The shape -of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`. - -##### Args: - - -* `fn`: The callable to be performed. -* `elems`: A tensor to be unpacked on dimension 0. -* `initializer`: (optional) The initial value for the accumulator. -* `parallel_iterations`: (optional) The number of iterations allowed to run - in parallel. -* `back_prop`: (optional) True enables back propagation. -* `swap_memory`: (optional) True enables GPU-CPU memory swapping. -* `name`: (optional) Name prefix for the returned tensors. - -##### Returns: - - A tensor that packs the results of applying `fn` to the list of tensors - unpacked from `elems`, from first to last. - -##### Raises: - - -* `TypeError`: if `fn` is not callable. 
-
-##### Example:
-
-  ```python
-  elems = [1, 2, 3, 4, 5, 6]
-  sum = scan(lambda a, x: a + x, elems)
-  # sum == [1, 3, 6, 10, 15, 21]
-  ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_sub.md
new file mode 100644
index 0000000000..8f1afc42f6
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_sub.md
@@ -0,0 +1,44 @@
+### `tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_sub}
+
+Subtracts sparse updates from a variable reference.
+
+    # Scalar indices
+    ref[indices, ...] -= updates[...]
+
+    # Vector indices (for each i)
+    ref[indices[i], ...] -= updates[i, ...]
+
+    # High rank indices (for each i, ..., j)
+    ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
+
+This operation outputs `ref` after the update is done.
+This makes it easier to chain operations that need to use the updated value.
+
+Duplicate entries are handled correctly: if multiple `indices` reference
+the same location, their (negated) contributions add.
+
+Requires `updates.shape = indices.shape + ref.shape[1:]`.
+
+ +
+ +##### Args: + + +* `ref`: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. + Should be from a `Variable` node. +* `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A tensor of indices into the first dimension of `ref`. +* `updates`: A `Tensor`. Must have the same type as `ref`. + A tensor of updated values to subtract from `ref`. +* `use_locking`: An optional `bool`. Defaults to `False`. + If True, the subtraction will be protected by a lock; + otherwise the behavior is undefined, but may exhibit less contention. +* `name`: A name for the operation (optional). + +##### Returns: + + Same as `ref`. Returned as a convenience for operations that want + to use the updated values after the update is done. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_min.md deleted file mode 100644 index 5cacf2cf72..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_min.md +++ /dev/null @@ -1,31 +0,0 @@ -### `tf.segment_min(data, segment_ids, name=None)` {#segment_min} - -Computes the minimum along segments of a tensor. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -Computes a tensor such that -\\(output_i = \min_j(data_j)\\) where `min` is over `j` such -that `segment_ids[j] == i`. - -
- -
- -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. - A 1-D tensor whose rank is equal to the rank of `data`'s - first dimension. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_sum.md new file mode 100644 index 0000000000..eeffe1601a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.segment_sum.md @@ -0,0 +1,30 @@ +### `tf.segment_sum(data, segment_ids, name=None)` {#segment_sum} + +Computes the sum along segments of a tensor. + +Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation) +for an explanation of segments. + +Computes a tensor such that +\\(output_i = \sum_j data_j\\) where sum is over `j` such +that `segment_ids[j] == i`. + +
+ +
+ +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `segment_ids`: A `Tensor`. Must be one of the following types: `int32`, `int64`. + A 1-D tensor whose rank is equal to the rank of `data`'s + first dimension. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.shape_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.shape_n.md deleted file mode 100644 index a229253406..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.shape_n.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.shape_n(input, name=None)` {#shape_n} - -Returns shape of tensors. - -This operation returns N 1-D integer tensors representing shape of `input[i]s`. - -##### Args: - - -* `input`: A list of at least 1 `Tensor` objects of the same type. -* `name`: A name for the operation (optional). - -##### Returns: - - A list with the same number of `Tensor` objects as `input` of `Tensor` objects of type `int32`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md new file mode 100644 index 0000000000..3ea1697f3d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md @@ -0,0 +1,54 @@ +### `tf.sparse_fill_empty_rows(sp_input, default_value, name=None)` {#sparse_fill_empty_rows} + +Fills empty rows in the input 2-D `SparseTensor` with a default value. 
+
+This op adds entries with the specified `default_value` at index
+`[row, 0]` for any row in the input that does not already have a value.
+
+For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:
+
+    [0, 1]: a
+    [0, 3]: b
+    [2, 0]: c
+    [3, 1]: d
+
+Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:
+
+    [0, 1]: a
+    [0, 3]: b
+    [1, 0]: default_value
+    [2, 0]: c
+    [3, 1]: d
+    [4, 0]: default_value
+
+Note that the input may have empty columns at the end, with no effect on
+this op.
+
+The output `SparseTensor` will be in row-major order and will have the
+same shape as the input.
+
+This op also returns an indicator vector such that
+
+    empty_row_indicator[i] = True iff row i was an empty row.
+
+##### Args:
+
+
+* `sp_input`: A `SparseTensor` with shape `[N, M]`.
+* `default_value`: The value to fill for empty rows, with the same type as
+  `sp_input`.
+* `name`: A name prefix for the returned tensors (optional).
+
+##### Returns:
+
+
+* `sp_ordered_output`: A `SparseTensor` with shape `[N, M]`, and with all empty
+    rows filled in with `default_value`.
+* `empty_row_indicator`: A bool vector of length `N` indicating whether each
+    input row was empty.
+
+##### Raises:
+
+
+* `TypeError`: If `sp_input` is not a `SparseTensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_placeholder.md
new file mode 100644
index 0000000000..def6c8329d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_placeholder.md
@@ -0,0 +1,43 @@
+### `tf.sparse_placeholder(dtype, shape=None, name=None)` {#sparse_placeholder}
+
+Inserts a placeholder for a sparse tensor that will always be fed.
+
+**Important**: This sparse tensor will produce an error if evaluated.
+Its value must be fed using the `feed_dict` optional argument to
+`Session.run()`, `Tensor.eval()`, or `Operation.run()`.
+
+For example:
+
+```python
+x = tf.sparse_placeholder(tf.float32)
+y = tf.sparse_reduce_sum(x)
+
+with tf.Session() as sess:
+  print(sess.run(y))  # ERROR: will fail because x was not fed.
+
+  indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
+  values = np.array([1.0, 2.0], dtype=np.float32)
+  shape = np.array([7, 9, 2], dtype=np.int64)
+  print(sess.run(y, feed_dict={
+    x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
+  print(sess.run(y, feed_dict={
+    x: (indices, values, shape)}))  # Will succeed.
+
+  sp = tf.SparseTensor(indices=indices, values=values, shape=shape)
+  sp_value = sp.eval(session=sess)
+  print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
+```
+
+##### Args:
+
+
+* `dtype`: The type of `values` elements in the tensor to be fed.
+* `shape`: The shape of the tensor to be fed (optional). If the shape is not
+  specified, you can feed a sparse tensor of any shape.
+* `name`: A name for prefixing the operations (optional).
+
+##### Returns:
+
+  A `SparseTensor` that may be used as a handle for feeding a value, but not
+  evaluated directly.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md
new file mode 100644
index 0000000000..1e7b8fd857
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md
@@ -0,0 +1,41 @@
+### `tf.sparse_reorder(sp_input, name=None)` {#sparse_reorder}
+
+Reorders a `SparseTensor` into the canonical, row-major ordering.
+
+Note that by convention, all sparse ops preserve the canonical ordering
+along increasing dimension number. The only time ordering can be violated
+is during manual manipulation of the indices and values to add entries.
+
+Reordering does not affect the shape of the `SparseTensor`.
+
+For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:
+
+    [0, 3]: b
+    [0, 1]: a
+    [3, 1]: d
+    [2, 0]: c
+
+then the output will be a `SparseTensor` of shape `[4, 5]` and
+`indices` / `values`:
+
+    [0, 1]: a
+    [0, 3]: b
+    [2, 0]: c
+    [3, 1]: d
+
+##### Args:
+
+
+* `sp_input`: The input `SparseTensor`.
+* `name`: A name prefix for the returned tensors (optional).
+
+##### Returns:
+
+  A `SparseTensor` with the same shape and non-empty values, but in
+  canonical ordering.
+
+##### Raises:
+
+
+* `TypeError`: If `sp_input` is not a `SparseTensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sum.md
new file mode 100644
index 0000000000..6691a6b7bc
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sum.md
@@ -0,0 +1,50 @@
+### `tf.sparse_segment_sum(data, indices, segment_ids, name=None)` {#sparse_segment_sum}
+
+Computes the sum along sparse segments of a tensor.
+
+Read [the section on
+Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
+of segments.
+
+Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first
+dimension, selecting a subset of dimension 0, specified by `indices`.
+
+For example:
+
+```prettyprint
+c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
+
+# Select two rows, one segment.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
+  ==> [[0 0 0 0]]
+
+# Select two rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
+  ==> [[ 1  2  3  4]
+       [-1 -2 -3 -4]]
+
+# Select all rows, two segments.
+tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
+  ==> [[0 0 0 0]
+       [5 6 7 8]]
+
+# Which is equivalent to:
+tf.segment_sum(c, tf.constant([0, 0, 1]))
+```
+
+##### Args:
+
+
+* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
+* `indices`: A `Tensor` of type `int32`.
+    A 1-D tensor. Has same rank as `segment_ids`.
+* `segment_ids`: A `Tensor` of type `int32`.
+    A 1-D tensor. Values should be sorted and can be repeated.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor`. Has the same type as `data`.
+  Has same shape as data, except for dimension 0 which
+  has size `k`, the number of segments.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_split.md
deleted file mode 100644
index e3e608a9e2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_split.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.sparse_split(split_dim, num_split, sp_input, name=None)` {#sparse_split}
-
-Split a `SparseTensor` into `num_split` tensors along `split_dim`.
-
-If `sp_input.shape[split_dim]` is not an integer multiple of `num_split`,
-each slice starting from `0 : shape[split_dim] % num_split` gets one extra
-dimension. For example, if `split_dim = 1` and `num_split = 2` and the
-input is:
-
-    input_tensor = shape = [2, 7]
-    [    a   d e  ]
-    [b c          ]
-
-Graphically the output tensors are:
-
-    output_tensor[0] =
-    [    a   ]
-    [b c     ]
-
-    output_tensor[1] =
-    [ d e  ]
-    [      ]
-
-##### Args:
-
-
-* `split_dim`: A 0-D `int32` `Tensor`. The dimension along which to split.
-* `num_split`: A Python integer. The number of ways to split.
-* `sp_input`: The `SparseTensor` to split.
-* `name`: A name for the operation (optional).
-
-##### Returns:
-
-  `num_split` `SparseTensor` objects resulting from splitting `value`.
-
-##### Raises:
-
-
-* `TypeError`: If `sp_input` is not a `SparseTensor`.
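The uneven-split rule for `sparse_split` above — the first `shape[split_dim] % num_split` slices each get one extra element — is the same rule NumPy's `array_split` uses. A dense stand-in for the `[2, 7]` example, purely illustrative (the real op works on sparse indices/values, not dense arrays):

```python
import numpy as np

# Dense stand-in for the [2, 7] example: 7 columns split 2 ways along axis 1.
x = np.arange(14).reshape(2, 7)
# 7 % 2 = 1, so the first of the two slices gets one extra column.
parts = np.array_split(x, 2, axis=1)
sizes = [p.shape[1] for p in parts]
# sizes == [4, 3], matching output_tensor[0] (4 cols) and output_tensor[1] (3 cols)
```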
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.string_to_hash_bucket.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.string_to_hash_bucket.md
new file mode 100644
index 0000000000..941d50e139
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.string_to_hash_bucket.md
@@ -0,0 +1,21 @@
+### `tf.string_to_hash_bucket(string_tensor, num_buckets, name=None)` {#string_to_hash_bucket}
+
+Converts each string in the input Tensor to its hash modulo the number of buckets.
+
+The hash function is deterministic on the content of the string within the
+process.
+
+Note that the hash function may change from time to time.
+
+##### Args:
+
+
+* `string_tensor`: A `Tensor` of type `string`.
+* `num_buckets`: An `int` that is `>= 1`. The number of buckets.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `int64`.
+  A Tensor of the same shape as the input `string_tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.get_temp_dir.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.get_temp_dir.md
new file mode 100644
index 0000000000..e36d6163a7
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.get_temp_dir.md
@@ -0,0 +1,10 @@
+### `tf.test.get_temp_dir()` {#get_temp_dir}
+
+Returns a temporary directory for use during tests.
+
+There is no need to delete the directory after the test.
+
+##### Returns:
+
+  The temporary directory.
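The pattern `tf.test.get_temp_dir` supports — an idempotent per-test scratch directory the test never has to clean up — can be approximated with the standard library when TF is unavailable. `scratch_dir` below is a made-up helper for illustration, not TF API:

```python
import os
import tempfile

def scratch_dir():
    """Create (at most once) and return a throwaway directory for test artifacts."""
    d = os.path.join(tempfile.gettempdir(), "my_test_scratch")
    os.makedirs(d, exist_ok=True)  # idempotent: repeated calls return the same dir
    return d

path = scratch_dir()
```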
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.is_built_with_cuda.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.is_built_with_cuda.md deleted file mode 100644 index 51e3d97d8c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.test.is_built_with_cuda.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.test.is_built_with_cuda()` {#is_built_with_cuda} - -Returns whether TensorFlow was built with CUDA (GPU) support. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tile.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tile.md new file mode 100644 index 0000000000..650f1f7eb8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tile.md @@ -0,0 +1,22 @@ +### `tf.tile(input, multiples, name=None)` {#tile} + +Constructs a tensor by tiling a given tensor. + +This operation creates a new tensor by replicating `input` `multiples` times. +The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements, +and the values of `input` are replicated `multiples[i]` times along the 'i'th +dimension. For example, tiling `[a b c d]` by `[2]` produces +`[a b c d a b c d]`. + +##### Args: + + +* `input`: A `Tensor`. 1-D or higher. +* `multiples`: A `Tensor` of type `int32`. + 1-D. Length must be the same as the number of dimensions in `input` +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.to_int64.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.to_int64.md deleted file mode 100644 index 0762822b3d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.to_int64.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.to_int64(x, name='ToInt64')` {#to_int64} - -Casts a tensor to type `int64`. 
- -##### Args: - - -* `x`: A `Tensor` or `SparseTensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` or `SparseTensor` with the same shape as `x` and type `int64`. - -##### Raises: - - -* `TypeError`: If `x` cannot be cast to `int64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradOptimizer.md deleted file mode 100644 index 35e416386e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradOptimizer.md +++ /dev/null @@ -1,26 +0,0 @@ -Optimizer that implements the Adagrad algorithm. - -See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf). - -- - - - -#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__} - -Construct a new Adagrad optimizer. - -##### Args: - - -* `learning_rate`: A `Tensor` or a floating point value. The learning rate. -* `initial_accumulator_value`: A floating point value. - Starting value for the accumulators; must be positive. -* `use_locking`: If `True`, use locks for update operations. -* `name`: Optional name prefix for the operations created when applying - gradients. Defaults to "Adagrad". - -##### Raises: - - -* `ValueError`: If the `initial_accumulator_value` is invalid. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdamOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdamOptimizer.md deleted file mode 100644 index 8667ec8ed3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdamOptimizer.md +++ /dev/null @@ -1,49 +0,0 @@ -Optimizer that implements the Adam algorithm. - -See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980) -([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
- -- - - - -#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__} - -Construct a new Adam optimizer. - -Initialization: - -``` -m_0 <- 0 (Initialize initial 1st moment vector) -v_0 <- 0 (Initialize initial 2nd moment vector) -t <- 0 (Initialize timestep) -``` - -The update rule for `variable` with gradient `g` uses an optimization -described at the end of section 2 of the paper: - -``` -t <- t + 1 -lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t) - -m_t <- beta1 * m_{t-1} + (1 - beta1) * g -v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g -variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon) -``` - -The default value of 1e-8 for epsilon might not be a good default in -general. For example, when training an Inception network on ImageNet, a -current good choice is 1.0 or 0.1. - -##### Args: - - -* `learning_rate`: A Tensor or a floating point value. The learning rate. -* `beta1`: A float value or a constant float tensor. - The exponential decay rate for the 1st moment estimates. -* `beta2`: A float value or a constant float tensor. - The exponential decay rate for the 2nd moment estimates. -* `epsilon`: A small constant for numerical stability. -* `use_locking`: If `True`, use locks for update operations. -* `name`: Optional name for the operations created when applying gradients. - Defaults to "Adam". - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Coordinator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Coordinator.md deleted file mode 100644 index f51c0721ff..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Coordinator.md +++ /dev/null @@ -1,223 +0,0 @@ -A coordinator for threads. - -This class implements a simple mechanism to coordinate the termination of a -set of threads. - -#### Usage: - -```python -# Create a coordinator.
-coord = Coordinator() -# Start a number of threads, passing the coordinator to each of them. -...start thread 1...(coord, ...) -...start thread N...(coord, ...) -# Wait for all the threads to terminate. -coord.join(threads) -``` - -Any of the threads can call `coord.request_stop()` to ask for all the threads -to stop. To cooperate with the requests, each thread must check for -`coord.should_stop()` on a regular basis. `coord.should_stop()` returns -`True` as soon as `coord.request_stop()` has been called. - -A typical thread running with a coordinator will do something like: - -```python -while not coord.should_stop(): - ...do some work... -``` - -#### Exception handling: - -A thread can report an exception to the coordinator as part of the -`should_stop()` call. The exception will be re-raised from the -`coord.join()` call. - -Thread code: - -```python -try: - while not coord.should_stop(): - ...do some work... -except Exception as e: - coord.request_stop(e) -``` - -Main code: - -```python -try: - ... - coord = Coordinator() - # Start a number of threads, passing the coordinator to each of them. - ...start thread 1...(coord, ...) - ...start thread N...(coord, ...) - # Wait for all the threads to terminate. - coord.join(threads) -except Exception as e: - ...exception that was passed to coord.request_stop() -``` - -To simplify the thread implementation, the Coordinator provides a -context handler `stop_on_exception()` that automatically requests a stop if -an exception is raised. Using the context handler the thread code above -can be written as: - -```python -with coord.stop_on_exception(): - while not coord.should_stop(): - ...do some work... -``` - -#### Grace period for stopping: - -After a thread has called `coord.request_stop()` the other threads have a -fixed time to stop, this is called the 'stop grace period' and defaults to 2 -minutes. 
If any of the threads is still alive after the grace period expires, -`coord.join()` raises a `RuntimeError` reporting the laggards. - -```python -try: - ... - coord = Coordinator() - # Start a number of threads, passing the coordinator to each of them. - ...start thread 1...(coord, ...) - ...start thread N...(coord, ...) - # Wait for all the threads to terminate, giving them a 10s grace period. - coord.join(threads, stop_grace_period_secs=10) -except RuntimeError: - ...one of the threads took more than 10s to stop after request_stop() - ...was called. -except Exception: - ...exception that was passed to coord.request_stop() -``` -- - - - -#### `tf.train.Coordinator.__init__()` {#Coordinator.__init__} - -Create a new Coordinator. - - -- - - - -#### `tf.train.Coordinator.clear_stop()` {#Coordinator.clear_stop} - -Clears the stop flag. - -After this is called, calls to `should_stop()` will return `False`. - - -- - - - -#### `tf.train.Coordinator.join(threads, stop_grace_period_secs=120)` {#Coordinator.join} - -Wait for threads to terminate. - -Blocks until all `threads` have terminated or `request_stop()` is called. - -After the threads stop, if an `exc_info` was passed to `request_stop`, that -exception is re-raised. - -Grace period handling: When `request_stop()` is called, threads are given -'stop_grace_period_secs' seconds to terminate. If any of them is still -alive after that period expires, a `RuntimeError` is raised. Note that if -an `exc_info` was passed to `request_stop()` then it is raised instead of -that `RuntimeError`. - -##### Args: - - -* `threads`: List of `threading.Thread` objects. The started threads to join. -* `stop_grace_period_secs`: Number of seconds given to threads to stop after - `request_stop()` has been called. - -##### Raises: - - -* `RuntimeError`: If any thread is still alive after `request_stop()` - is called and the grace period expires.
- - -- - - - -#### `tf.train.Coordinator.request_stop(ex=None)` {#Coordinator.request_stop} - -Request that the threads stop. - -After this is called, calls to `should_stop()` will return `True`. - -Note: If an exception is being passed in, it must be in the context of -handling the exception (i.e. `try: ... except Exception as ex: ...`) and not -a newly created one. - -##### Args: - - -* `ex`: Optional `Exception`, or Python `exc_info` tuple as returned by - `sys.exc_info()`. If this is the first call to `request_stop()` the - corresponding exception is recorded and re-raised from `join()`. - - -- - - - -#### `tf.train.Coordinator.should_stop()` {#Coordinator.should_stop} - -Check if stop was requested. - -##### Returns: - - True if a stop was requested. - - -- - - - -#### `tf.train.Coordinator.stop_on_exception()` {#Coordinator.stop_on_exception} - -Context manager to request stop when an Exception is raised. - -Code that uses a coordinator must catch exceptions and pass -them to the `request_stop()` method to stop the other threads -managed by the coordinator. - -This context handler simplifies the exception handling. -Use it as follows: - -```python -with coord.stop_on_exception(): - # Any exception raised in the body of the with - # clause is reported to the coordinator before terminating - # the execution of the body. - ...body... -``` - -This is completely equivalent to the slightly longer code: - -```python -try: - ...body... -except Exception as ex: - coord.request_stop(ex) -``` - -##### Yields: - - nothing. - - -- - - - -#### `tf.train.Coordinator.wait_for_stop(timeout=None)` {#Coordinator.wait_for_stop} - -Wait till the Coordinator is told to stop. - -##### Args: - - -* `timeout`: Float. Sleep for up to that many seconds waiting for - should_stop() to become True. - -##### Returns: - - True if the Coordinator is told to stop, False if the timeout expired.
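The shared stop-flag protocol described in the `Coordinator` docs above can be sketched without TensorFlow using a `threading.Event`. The `MiniCoordinator` class and `worker` function below are hypothetical toy stand-ins, not TF's implementation; in particular this sketch omits the grace period and exception re-raising:

```python
import threading

class MiniCoordinator:
    """Toy stand-in for tf.train.Coordinator: a shared stop flag plus join."""

    def __init__(self):
        self._stop_event = threading.Event()

    def should_stop(self):
        # True as soon as any thread has requested a stop.
        return self._stop_event.is_set()

    def request_stop(self):
        self._stop_event.set()

    def join(self, threads):
        # Simplified join: no grace period, no exception re-raising.
        for t in threads:
            t.join()

def worker(coord, results, limit):
    steps = 0
    while not coord.should_stop():
        steps += 1
        if steps >= limit:
            coord.request_stop()  # any thread may stop all of them
    results.append(steps)

coord = MiniCoordinator()
results = []
threads = [threading.Thread(target=worker, args=(coord, results, 1000))
           for _ in range(3)]
for t in threads:
    t.start()
coord.join(threads)
```

Once one worker reaches its limit and calls `request_stop()`, every other worker observes `should_stop()` on its next loop check and exits, which is exactly the cooperative-shutdown contract the real class documents.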
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Server.create_local_server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Server.create_local_server.md deleted file mode 100644 index f349dc0748..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Server.create_local_server.md +++ /dev/null @@ -1,19 +0,0 @@ -#### `tf.train.Server.create_local_server(start=True)` {#Server.create_local_server} - -Creates a new single-process cluster running on the local host. - -This method is a convenience wrapper for creating a -`tf.train.Server` with a `tf.train.ServerDef` that specifies a -single-process cluster containing a single task in a job called -`"local"`. - -##### Args: - - -* `start`: (Optional.) Boolean, indicating whether to start the server after - creating it. Defaults to `True`. - -##### Returns: - - A local `tf.train.Server`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.add_queue_runner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.add_queue_runner.md new file mode 100644 index 0000000000..f5b9549ad8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.add_queue_runner.md @@ -0,0 +1,18 @@ +### `tf.train.add_queue_runner(qr, collection='queue_runners')` {#add_queue_runner} + +Adds a `QueueRunner` to a collection in the graph. + +When building a complex model that uses many queues it is often difficult to +gather all the queue runners that need to be run. This convenience function +allows you to add a queue runner to a well known collection in the graph. + +The companion method `start_queue_runners()` can be used to start threads for +all the collected queue runners. + +##### Args: + + +* `qr`: A `QueueRunner`. +* `collection`: A `GraphKey` specifying the graph collection to add + the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`. 
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.exponential_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.exponential_decay.md deleted file mode 100644 index 2b8e72a0a2..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.exponential_decay.md +++ /dev/null @@ -1,54 +0,0 @@ -### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay} - -Applies exponential decay to the learning rate. - -When training a model, it is often recommended to lower the learning rate as -the training progresses. This function applies an exponential decay function -to a provided initial learning rate. It requires a `global_step` value to -compute the decayed learning rate. You can just pass a TensorFlow variable -that you increment at each training step. - -The function returns the decayed learning rate. It is computed as: - -```python -decayed_learning_rate = learning_rate * - decay_rate ^ (global_step / decay_steps) -``` - -If the argument `staircase` is `True`, then `global_step /decay_steps` is an -integer division and the decayed learning rate follows a staircase function. - -Example: decay every 100000 steps with a base of 0.96: - -```python -... -global_step = tf.Variable(0, trainable=False) -starter_learning_rate = 0.1 -learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, - 100000, 0.96, staircase=True) -# Passing global_step to minimize() will increment it at each step. -learning_step = ( - tf.GradientDescentOptimizer(learning_rate) - .minimize(...my loss..., global_step=global_step) -) -``` - -##### Args: - - -* `learning_rate`: A scalar `float32` or `float64` `Tensor` or a - Python number. The initial learning rate. -* `global_step`: A scalar `int32` or `int64` `Tensor` or a Python number. - Global step to use for the decay computation. Must not be negative. 
-* `decay_steps`: A scalar `int32` or `int64` `Tensor` or a Python number. - Must be positive. See the decay computation above. -* `decay_rate`: A scalar `float32` or `float64` `Tensor` or a - Python number. The decay rate. -* `staircase`: Boolean. If `True`, decay the learning rate at discrete intervals. -* `name`: String. Optional name of the operation. Defaults to 'ExponentialDecay'. - -##### Returns: - - A scalar `Tensor` of the same type as `learning_rate`. The decayed - learning rate. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.global_step.md deleted file mode 100644 index a53175be6a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.global_step.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.train.global_step(sess, global_step_tensor)` {#global_step} - -Small helper to get the global step. - -```python -# Creates a variable to hold the global_step. -global_step_tensor = tf.Variable(10, trainable=False, name='global_step') -# Creates a session. -sess = tf.Session() -# Initializes the variable. -sess.run(global_step_tensor.initializer) -print('global_step: %s' % tf.train.global_step(sess, global_step_tensor)) - -global_step: 10 -``` - -##### Args: - - -* `sess`: A TensorFlow `Session` object. -* `global_step_tensor`: `Tensor` or the `name` of the operation that contains - the global step. - -##### Returns: - - The global step value.
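The `exponential_decay` computation documented above can be checked with a small pure-Python sketch. The helper below is a hypothetical stand-in for the TF op, operating on plain scalars rather than tensors:

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    """decayed_lr = learning_rate * decay_rate ** (global_step / decay_steps)."""
    if staircase:
        # Integer division: the rate is piecewise constant within each interval.
        exponent = global_step // decay_steps
    else:
        exponent = global_step / decay_steps
    return learning_rate * decay_rate ** exponent

# Mirrors the doc example: decay every 100000 steps with a base of 0.96.
lr_start = exponential_decay(0.1, 0, 100000, 0.96, staircase=True)
lr_mid = exponential_decay(0.1, 50000, 100000, 0.96, staircase=True)   # unchanged
lr_next = exponential_decay(0.1, 100000, 100000, 0.96, staircase=True) # 0.1 * 0.96
```

With `staircase=True` the rate only drops at interval boundaries, which is why `lr_mid` equals the starting rate while `lr_next` has decayed by one factor of 0.96.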
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md deleted file mode 100644 index 21b3465076..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md +++ /dev/null @@ -1,65 +0,0 @@ -### `tf.train.import_meta_graph(meta_graph_or_file)` {#import_meta_graph} - -Recreates a Graph saved in a `MetaGraphDef` proto. - -This function takes a `MetaGraphDef` protocol buffer as input. If -the argument is a file containing a `MetaGraphDef` protocol buffer , -it constructs a protocol buffer from the file content. The function -then adds all the nodes from the `graph_def` field to the -current graph, recreates all the collections, and returns a saver -constructed from the `saver_def` field. - -In combination with `export_meta_graph()`, this function can be used to - -* Serialize a graph along with other Python objects such as `QueueRunner`, - `Variable` into a `MetaGraphDef`. - -* Restart training from a saved graph and checkpoints. - -* Run inference from a saved graph and checkpoints. - -```Python -... -# Create a saver. -saver = tf.train.Saver(...variables...) -# Remember the training_op we want to run by adding it to a collection. -tf.add_to_collection('train_op', train_op) -sess = tf.Session() -for step in xrange(1000000): - sess.run(train_op) - if step % 1000 == 0: - # Saves checkpoint, which by default also exports a meta_graph - # named 'my-model-global_step.meta'. - saver.save(sess, 'my-model', global_step=step) -``` - -Later we can continue training from this saved `meta_graph` without building -the model from scratch. - -```Python -with tf.Session() as sess: - new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta') - new_saver.restore(sess, 'my-save-dir/my-model-10000') - # tf.get_collection() returns a list. 
In this example we only want the - # first one. - train_op = tf.get_collection('train_op')[0] - for step in xrange(1000000): - sess.run(train_op) -``` - -NOTE: Restarting training from saved `meta_graph` only works if the -device assignments have not changed. - -##### Args: - - -* `meta_graph_or_file`: `MetaGraphDef` protocol buffer or filename (including - the path) containing a `MetaGraphDef`. - -##### Returns: - - A saver constructed from `saver_def` in `MetaGraphDef` or None. - - A None value is returned if no variables exist in the `MetaGraphDef` - (i.e., there are no variables to restore). - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.latest_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.latest_checkpoint.md deleted file mode 100644 index b1fc87cdd7..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.latest_checkpoint.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)` {#latest_checkpoint} - -Finds the filename of latest saved checkpoint file. - -##### Args: - - -* `checkpoint_dir`: Directory where the variables were saved. -* `latest_filename`: Optional name for the protocol buffer file that - contains the list of most recent checkpoint filenames. - See the corresponding argument to `Saver.save()`. - -##### Returns: - - The full path to the latest checkpoint or `None` if no checkpoint was found. 
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.match_filenames_once.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.match_filenames_once.md new file mode 100644 index 0000000000..6c84221cc5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.match_filenames_once.md @@ -0,0 +1,14 @@ +### `tf.train.match_filenames_once(pattern, name=None)` {#match_filenames_once} + +Save the list of files matching `pattern`, so it is only computed once. + +##### Args: + + +* `pattern`: A file pattern (glob). +* `name`: A name for the operations (optional). + +##### Returns: + + A variable that is initialized to the list of files matching `pattern`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.summary_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.summary_iterator.md deleted file mode 100644 index 5702571441..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.summary_iterator.md +++ /dev/null @@ -1,42 +0,0 @@ -### `tf.train.summary_iterator(path)` {#summary_iterator} - -An iterator for reading `Event` protocol buffers from an event file. - -You can use this function to read events written to an event file. It returns -a Python iterator that yields `Event` protocol buffers. - -Example: Print the contents of an events file. - -```python -for e in tf.train.summary_iterator(path to events file): - print(e) -``` - -Example: Print selected summary values. - -```python -# This example supposes that the events file contains summaries with a -# summary value tag 'loss'. These could have been added by calling -# `add_summary()`, passing the output of a scalar summary op created -# with: `tf.scalar_summary(['loss'], loss_tensor)`.
-for e in tf.train.summary_iterator(path to events file): - for v in e.summary.value: - if v.tag == 'loss': - print(v.simple_value) -``` - -See the protocol buffer definitions of -[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto) -and -[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) -for more information about their attributes. - -##### Args: - - -* `path`: The path to an event file created by a `SummaryWriter`. - -##### Yields: - - `Event` protocol buffers. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.trainable_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.trainable_variables.md deleted file mode 100644 index 894d64a2b4..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.trainable_variables.md +++ /dev/null @@ -1,13 +0,0 @@ -### `tf.trainable_variables()` {#trainable_variables} - -Returns all variables created with `trainable=True`. - -When passed `trainable=True`, the `Variable()` constructor automatically -adds new variables to the graph collection -`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the -contents of that collection. - -##### Returns: - - A list of Variable objects. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.unique.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.unique.md deleted file mode 100644 index 0929f57b0f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.unique.md +++ /dev/null @@ -1,33 +0,0 @@ -### `tf.unique(x, name=None)` {#unique} - -Finds unique elements in a 1-D tensor. - -This operation returns a tensor `y` containing all of the unique elements of `x` -sorted in the same order that they occur in `x`. This operation also returns a -tensor `idx` the same size as `x` that contains the index of each value of `x` -in the unique output `y`. 
In other words: - -`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]` - -For example: - -```prettyprint -# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] -y, idx = unique(x) -y ==> [1, 2, 4, 7, 8] -idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] -``` - -##### Args: - - -* `x`: A `Tensor`. 1-D. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of `Tensor` objects (y, idx). - -* `y`: A `Tensor`. Has the same type as `x`. 1-D. -* `idx`: A `Tensor` of type `int32`. 1-D. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.verify_tensor_all_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.verify_tensor_all_finite.md deleted file mode 100644 index 37fa105df5..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.verify_tensor_all_finite.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.verify_tensor_all_finite(t, msg, name=None)` {#verify_tensor_all_finite} - -Assert that the tensor does not contain any NaN's or Inf's. - -##### Args: - - -* `t`: Tensor to check. -* `msg`: Message to log on failure. -* `name`: A name for this operation (optional). - -##### Returns: - - Same tensor as `t`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md new file mode 100644 index 0000000000..ed66237d38 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md @@ -0,0 +1,21 @@ +### `tf.zeta(x, q, name=None)` {#zeta} + +Compute the Hurwitz zeta function \\(\zeta(x, q)\\). + +The Hurwitz zeta function is defined as: + +``` +\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x} +``` + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `q`: A `Tensor`. Must have the same type as `x`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
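The Hurwitz zeta series defined above converges for x > 1 and can be approximated numerically by truncating the sum. The `hurwitz_zeta` helper below is a hypothetical pure-Python sketch of that definition, not the TF kernel:

```python
import math

def hurwitz_zeta(x, q, terms=100000):
    """Truncated series for zeta(x, q) = sum_{n>=0} (q + n)**(-x).

    Converges for x > 1 and q > 0; truncation error is roughly
    terms**(1 - x) / (x - 1).
    """
    return sum((q + n) ** -x for n in range(terms))

# zeta(2, 1) is the ordinary Riemann zeta(2) = pi**2 / 6 ~= 1.6449...
approx = hurwitz_zeta(2.0, 1.0)
```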
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md new file mode 100644 index 0000000000..18c651a45d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md @@ -0,0 +1,146 @@ +Represents a (possibly partial) specification for a TensorFlow device. + +`DeviceSpec`s are used throughout TensorFlow to describe where state is stored +and computations occur. Using `DeviceSpec` allows you to parse device spec +strings to verify their validity, merge them or compose them programmatically. + +Example: +```python +# Place the operations on device "GPU:0" in the "ps" job. +device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0) +with tf.device(device_spec): + # Both my_var and squared_var will be placed on /job:ps/device:GPU:0. + my_var = tf.Variable(..., name="my_variable") + squared_var = tf.square(my_var) +``` + +If a `DeviceSpec` is partially specified, it will be merged with other +`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec` +components defined in inner scopes take precedence over those defined in +outer scopes. + +```python +with tf.device(DeviceSpec(job="train")): + with tf.device(DeviceSpec(job="ps", device_type="GPU", device_index=0)): + # Nodes created here will be assigned to /job:ps/device:GPU:0. + with tf.device(DeviceSpec(device_type="GPU", device_index=1)): + # Nodes created here will be assigned to /job:train/device:GPU:1. +``` + +A `DeviceSpec` consists of 5 components -- each of +which is optionally specified: + +* Job: The job name. +* Replica: The replica index. +* Task: The task index. +* Device type: The device type string (e.g. "CPU" or "GPU"). +* Device index: The device index. +- - - + +#### `tf.DeviceSpec.__init__(job=None, replica=None, task=None, device_type=None, device_index=None)` {#DeviceSpec.__init__} + +Create a new `DeviceSpec` object.
+ +##### Args: + + +* `job`: string. Optional job name. +* `replica`: int. Optional replica index. +* `task`: int. Optional task index. +* `device_type`: Optional device type string (e.g. "CPU" or "GPU") +* `device_index`: int. Optional device index. If left + unspecified, device represents 'any' device_index. + + +- - - + +#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string} + +Construct a `DeviceSpec` from a string. + +##### Args: + + +* `spec`: a string of the form + /job:/replica:/task:/device:CPU: + or + /job:/replica:/task:/device:GPU: + as cpu and gpu are mutually exclusive. + All entries are optional. + +##### Returns: + + A DeviceSpec. + + +- - - + +#### `tf.DeviceSpec.job` {#DeviceSpec.job} + + + + +- - - + +#### `tf.DeviceSpec.merge_from(dev)` {#DeviceSpec.merge_from} + +Merge the properties of "dev" into this `DeviceSpec`. + +##### Args: + + +* `dev`: a `DeviceSpec`. + + +- - - + +#### `tf.DeviceSpec.parse_from_string(spec)` {#DeviceSpec.parse_from_string} + +Parse a `DeviceSpec` name into its components. + +##### Args: + + +* `spec`: a string of the form + /job:/replica:/task:/device:CPU: + or + /job:/replica:/task:/device:GPU: + as cpu and gpu are mutually exclusive. + All entries are optional. + +##### Returns: + + The `DeviceSpec`. + +##### Raises: + + +* `ValueError`: if the spec was not valid. + + +- - - + +#### `tf.DeviceSpec.replica` {#DeviceSpec.replica} + + + + +- - - + +#### `tf.DeviceSpec.task` {#DeviceSpec.task} + + + + +- - - + +#### `tf.DeviceSpec.to_string()` {#DeviceSpec.to_string} + +Return a string representation of this `DeviceSpec`. + +##### Returns: + + a string of the form + /job:/replica:/task:/device::. 
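The merge rule documented for `merge_from` and scoped `DeviceSpec`s (fields set in the inner spec win) can be sketched with plain dictionaries. The `merge_specs` helper and its field names below are hypothetical illustrations, not the actual `DeviceSpec` class:

```python
def merge_specs(base, override):
    """Merge two partial device specs represented as dicts.

    Fields explicitly set (non-None) in `override` take precedence,
    mirroring how inner tf.device scopes win over outer ones.
    """
    merged = dict(base)
    for key, value in override.items():
        if value is not None:
            merged[key] = value
    return merged

# Outer scope pins the job; the inner scope pins the device.
outer = {"job": "train", "replica": None, "task": None,
         "device_type": None, "device_index": None}
inner = {"job": None, "replica": None, "task": None,
         "device_type": "GPU", "device_index": 1}
spec = merge_specs(outer, inner)  # behaves like /job:train/device:GPU:1
```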
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.md deleted file mode 100644 index 0ae54940a8..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.md +++ /dev/null @@ -1,31 +0,0 @@ -Configuration for parsing a fixed-length input feature. - -To treat sparse input as dense, provide a `default_value`; otherwise, -the parse functions will fail on any examples missing this feature. - -Fields: - shape: Shape of input data. - dtype: Data type of input. - default_value: Value to be used if an example is missing this feature. It - must be compatible with `dtype`. -- - - - -#### `tf.FixedLenFeature.default_value` {#FixedLenFeature.default_value} - -Alias for field number 2 - - -- - - - -#### `tf.FixedLenFeature.dtype` {#FixedLenFeature.dtype} - -Alias for field number 1 - - -- - - - -#### `tf.FixedLenFeature.shape` {#FixedLenFeature.shape} - -Alias for field number 0 - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.InteractiveSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.InteractiveSession.md deleted file mode 100644 index cdb5101815..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.InteractiveSession.md +++ /dev/null @@ -1,68 +0,0 @@ -A TensorFlow `Session` for use in interactive contexts, such as a shell. - -The only difference with a regular `Session` is that an `InteractiveSession` -installs itself as the default session on construction. -The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) -and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run) -will use that session to run ops. - -This is convenient in interactive shells and [IPython -notebooks](http://ipython.org), as it avoids having to pass an explicit -`Session` object to run ops. 
- -For example: - -```python -sess = tf.InteractiveSession() -a = tf.constant(5.0) -b = tf.constant(6.0) -c = a * b -# We can just use 'c.eval()' without passing 'sess' -print(c.eval()) -sess.close() -``` - -Note that a regular session installs itself as the default session when it -is created in a `with` statement. The common usage in non-interactive -programs is to follow that pattern: - -```python -a = tf.constant(5.0) -b = tf.constant(6.0) -c = a * b -with tf.Session(): - # We can also use 'c.eval()' here. - print(c.eval()) -``` - -- - - - -#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__} - -Creates a new interactive TensorFlow session. - -If no `graph` argument is specified when constructing the session, -the default graph will be launched in the session. If you are -using more than one graph (created with `tf.Graph()`) in the same -process, you will have to use different sessions for each graph, -but each graph can be used in multiple sessions. In this case, it -is often clearer to pass the graph to be launched explicitly to -the session constructor. - -##### Args: - - -* `target`: (Optional.) The execution engine to connect to. - Defaults to using an in-process engine. At present, no value - other than the empty string is supported. -* `graph`: (Optional.) The `Graph` to be launched (described above). -* `config`: (Optional.) `ConfigProto` proto used to configure the session. - - -- - - - -#### `tf.InteractiveSession.close()` {#InteractiveSession.close} - -Closes an `InteractiveSession`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.OpError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.OpError.md deleted file mode 100644 index c23014ad17..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.OpError.md +++ /dev/null @@ -1,62 +0,0 @@ -A generic error that is raised when TensorFlow execution fails.
- -Whenever possible, the session will raise a more specific subclass -of `OpError` from the `tf.errors` module. - -- - - - -#### `tf.OpError.op` {#OpError.op} - -The operation that failed, if known. - -*N.B.* If the failed op was synthesized at runtime, e.g. a `Send` -or `Recv` op, there will be no corresponding -[`Operation`](../../api_docs/python/framework.md#Operation) -object. In that case, this will return `None`, and you should -instead use the [`OpError.node_def`](#OpError.node_def) to -discover information about the op. - -##### Returns: - - The `Operation` that failed, or None. - - -- - - - -#### `tf.OpError.node_def` {#OpError.node_def} - -The `NodeDef` proto representing the op that failed. - - - -#### Other Methods -- - - - -#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__} - -Creates a new `OpError` indicating that a particular op failed. - -##### Args: - - -* `node_def`: The `graph_pb2.NodeDef` proto representing the op that failed, - if known; otherwise None. -* `op`: The `ops.Operation` that failed, if known; otherwise None. -* `message`: The message string describing the failure. -* `error_code`: The `error_codes_pb2.Code` describing the error. - - -- - - - -#### `tf.OpError.error_code` {#OpError.error_code} - -The integer error code that describes the error. - - -- - - - -#### `tf.OpError.message` {#OpError.message} - -The error message that describes the error. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Operation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Operation.md deleted file mode 100644 index a9e21fb29e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Operation.md +++ /dev/null @@ -1,225 +0,0 @@ -Represents a graph node that performs computation on tensors. 
- -An `Operation` is a node in a TensorFlow `Graph` that takes zero or -more `Tensor` objects as input, and produces zero or more `Tensor` -objects as output. Objects of type `Operation` are created by -calling a Python op constructor (such as -[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul)) -or [`Graph.create_op()`](../../api_docs/python/framework.md#Graph.create_op). - -For example `c = tf.matmul(a, b)` creates an `Operation` of type -"MatMul" that takes tensors `a` and `b` as input, and produces `c` -as output. - -After the graph has been launched in a session, an `Operation` can -be executed by passing it to -[`Session.run()`](../../api_docs/python/client.md#Session.run). -`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`. - -- - - - -#### `tf.Operation.name` {#Operation.name} - -The full name of this operation. - - -- - - - -#### `tf.Operation.type` {#Operation.type} - -The type of the op (e.g. `"MatMul"`). - - -- - - - -#### `tf.Operation.inputs` {#Operation.inputs} - -The list of `Tensor` objects representing the data inputs of this op. - - -- - - - -#### `tf.Operation.control_inputs` {#Operation.control_inputs} - -The `Operation` objects on which this op has a control dependency. - -Before this op is executed, TensorFlow will ensure that the -operations in `self.control_inputs` have finished executing. This -mechanism can be used to run ops sequentially for performance -reasons, or to ensure that the side effects of an op are observed -in the correct order. - -##### Returns: - - A list of `Operation` objects. - - -- - - - -#### `tf.Operation.outputs` {#Operation.outputs} - -The list of `Tensor` objects representing the outputs of this op. - - -- - - - -#### `tf.Operation.device` {#Operation.device} - -The name of the device to which this op has been assigned, if any. - -##### Returns: - - The string name of the device to which this op has been - assigned, or an empty string if it has not been assigned to a - device. 
- - -- - - - -#### `tf.Operation.graph` {#Operation.graph} - -The `Graph` that contains this operation. - - - -- - - - -#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run} - -Runs this operation in a `Session`. - -Calling this method will execute all preceding operations that -produce the inputs needed for this operation. - -*N.B.* Before invoking `Operation.run()`, its graph must have been -launched in a session, and either a default session must be -available, or `session` must be specified explicitly. - -##### Args: - - -* `feed_dict`: A dictionary that maps `Tensor` objects to feed values. - See [`Session.run()`](../../api_docs/python/client.md#Session.run) - for a description of the valid feed values. -* `session`: (Optional.) The `Session` to be used to run to this operation. If - none, the default session will be used. - - - -- - - - -#### `tf.Operation.get_attr(name)` {#Operation.get_attr} - -Returns the value of the attr of this op with the given `name`. - -##### Args: - - -* `name`: The name of the attr to fetch. - -##### Returns: - - The value of the attr, as a Python object. - -##### Raises: - - -* `ValueError`: If this op does not have an attr with the given `name`. - - -- - - - -#### `tf.Operation.traceback` {#Operation.traceback} - -Returns the call stack from when this operation was constructed. - - - -#### Other Methods -- - - - -#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__} - -Creates an `Operation`. - -NOTE: This constructor validates the name of the `Operation` (passed -as `node_def.name`). Valid `Operation` names match the following -regular expression: - - [A-Za-z0-9.][A-Za-z0-9_.\-/]* - -##### Args: - - -* `node_def`: `graph_pb2.NodeDef`. `NodeDef` for the `Operation`. - Used for attributes of `graph_pb2.NodeDef`, typically `name`, - `op`, and `device`. 
The `input` attribute is irrelevant here - as it will be computed when generating the model. -* `g`: `Graph`. The parent graph. -* `inputs`: list of `Tensor` objects. The inputs to this `Operation`. -* `output_types`: list of `DType` objects. List of the types of the - `Tensors` computed by this operation. The length of this list indicates - the number of output endpoints of the `Operation`. -* `control_inputs`: list of operations or tensors from which to have a - control dependency. -* `input_types`: List of `DType` objects representing the - types of the tensors accepted by the `Operation`. By default - uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect - reference-typed inputs must specify these explicitly. -* `original_op`: Optional. Used to associate the new `Operation` with an - existing `Operation` (for example, a replica with the op that was - replicated). -* `op_def`: Optional. The `op_def_pb2.OpDef` proto that describes the - op type that this `Operation` represents. - -##### Raises: - - -* `TypeError`: if control inputs are not Operations or Tensors, - or if `node_def` is not a `NodeDef`, - or if `g` is not a `Graph`, - or if `inputs` are not tensors, - or if `inputs` and `input_types` are incompatible. -* `ValueError`: if the `node_def` name is not valid. - - -- - - - -#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups} - -Returns the list of colocation groups of the op. - - -- - - - -#### `tf.Operation.node_def` {#Operation.node_def} - -Returns a serialized `NodeDef` representation of this operation. - -##### Returns: - - A - [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) - protocol buffer. - - -- - - - -#### `tf.Operation.op_def` {#Operation.op_def} - -Returns the `OpDef` proto that represents the type of this op. - -##### Returns: - - An - [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto) - protocol buffer. 
- - -- - - - -#### `tf.Operation.values()` {#Operation.values} - -DEPRECATED: Use outputs. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Print.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Print.md deleted file mode 100644 index b1ec7c1af0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Print.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)` {#Print} - -Prints a list of tensors. - -This is an identity op with the side effect of printing `data` when -evaluating. - -##### Args: - - -* `input_`: A tensor passed through this op. -* `data`: A list of tensors to print out when op is evaluated. -* `message`: A string, prefix of the error message. -* `first_n`: Only log `first_n` number of times. Negative numbers log always; - this is the default. -* `summarize`: Only print this many entries of each tensor. If None, then a - maximum of 3 elements are printed per input tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - Same tensor as `input_`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ReaderBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ReaderBase.md deleted file mode 100644 index bc9f62de5a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ReaderBase.md +++ /dev/null @@ -1,156 +0,0 @@ -Base class for different Reader types, that produce a record every step. - -Conceptually, Readers convert string 'work units' into records (key, -value pairs). Typically the 'work units' are filenames and the -records are extracted from the contents of those files. We want a -single record produced per step, but a work unit can correspond to -many records. - -Therefore we introduce some decoupling using a queue. 
The queue -contains the work units and the Reader dequeues from the queue when -it is asked to produce a record (via Read()) but it has finished the -last work unit. -- - - - -#### `tf.ReaderBase.__init__(reader_ref, supports_serialize=False)` {#ReaderBase.__init__} - -Creates a new ReaderBase. - -##### Args: - - -* `reader_ref`: The operation that implements the reader. -* `supports_serialize`: True if the reader implementation can - serialize its state. - - -- - - - -#### `tf.ReaderBase.num_records_produced(name=None)` {#ReaderBase.num_records_produced} - -Returns the number of records this reader has produced. - -This is the same as the number of Read executions that have -succeeded. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.ReaderBase.num_work_units_completed(name=None)` {#ReaderBase.num_work_units_completed} - -Returns the number of work units this reader has finished processing. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - An int64 Tensor. - - -- - - - -#### `tf.ReaderBase.read(queue, name=None)` {#ReaderBase.read} - -Returns the next record (key, value pair) produced by a reader. - -Will dequeue a work unit from queue if necessary (e.g. when the -Reader needs to start reading from a new file since it has -finished with the previous file). - -##### Args: - - -* `queue`: A Queue or a mutable string Tensor representing a handle - to a Queue, with string work items. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of Tensors (key, value). - -* `key`: A string scalar Tensor. -* `value`: A string scalar Tensor. - - -- - - - -#### `tf.ReaderBase.reader_ref` {#ReaderBase.reader_ref} - -Op that implements the reader. - - -- - - - -#### `tf.ReaderBase.reset(name=None)` {#ReaderBase.reset} - -Restore a reader to its initial clean state. - -##### Args: - - -* `name`: A name for the operation (optional). 
- -##### Returns: - - The created Operation. - - -- - - - -#### `tf.ReaderBase.restore_state(state, name=None)` {#ReaderBase.restore_state} - -Restore a reader to a previously saved state. - -Not all Readers support being restored, so this can produce an -Unimplemented error. - -##### Args: - - -* `state`: A string Tensor. - Result of a SerializeState of a Reader with matching type. -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - - -- - - - -#### `tf.ReaderBase.serialize_state(name=None)` {#ReaderBase.serialize_state} - -Produce a string tensor that encodes the state of a reader. - -Not all Readers support being serialized, so this can produce an -Unimplemented error. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - A string Tensor. - - -- - - - -#### `tf.ReaderBase.supports_serialize` {#ReaderBase.supports_serialize} - -Whether the Reader implementation can serialize its state. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.RegisterShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.RegisterShape.md new file mode 100644 index 0000000000..e3bb956f88 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.RegisterShape.md @@ -0,0 +1,27 @@ +A decorator for registering the shape function for an op type. + +This decorator is only used when defining a new op type. A shape +function is a function from an `Operation` object to a list of +`TensorShape` objects, with one `TensorShape` for each output of the +operation. 
+ + +For example, assuming that operations of type `"Sub"` take two +inputs `x` and `y`, and return a single output `x - y`, all with the +same shape, the following shape function would be registered: + +```python +@tf.RegisterShape("Sub") +def _sub_shape(op): + return [op.inputs[0].get_shape().merge_with(op.inputs[1].get_shape())] +``` + +The decorator argument `op_type` is the string type of an +operation. This corresponds to the `OpDef.name` field for the proto +that defines the operation. +- - - + +#### `tf.RegisterShape.__init__(op_type)` {#RegisterShape.__init__} + +Saves the `op_type` as the `Operation` type. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.TensorShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.TensorShape.md new file mode 100644 index 0000000000..506f44d838 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.TensorShape.md @@ -0,0 +1,316 @@ +Represents the shape of a `Tensor`. + +A `TensorShape` represents a possibly-partial shape specification for a +`Tensor`. It may be one of the following: + +* *Fully-known shape:* has a known number of dimensions and a known size + for each dimension. +* *Partially-known shape:* has a known number of dimensions, and an unknown + size for one or more dimensions. +* *Unknown shape:* has an unknown number of dimensions, and an unknown + size in all dimensions. + +If a tensor is produced by an operation of type `"Foo"`, its shape +may be inferred if there is a registered shape function for +`"Foo"`. See [`tf.RegisterShape()`](../../api_docs/python/framework.md#RegisterShape) +for details of shape +functions and how to register them. Alternatively, the shape may be set +explicitly using [`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape). + +- - - + +#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with} + +Returns a `TensorShape` combining the information in `self` and `other`.
+ +The dimensions in `self` and `other` are merged elementwise, +according to the rules defined for `Dimension.merge_with()`. + +##### Args: + + +* `other`: Another `TensorShape`. + +##### Returns: + + A `TensorShape` containing the combined information of `self` and + `other`. + +##### Raises: + + +* `ValueError`: If `self` and `other` are not compatible. + + +- - - + +#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate} + +Returns the concatenation of the dimension in `self` and `other`. + +*N.B.* If either `self` or `other` is completely unknown, +concatenation will discard information about the other shape. In +future, we might support concatenation that preserves this +information for use with slicing. + +##### Args: + + +* `other`: Another `TensorShape`. + +##### Returns: + + A `TensorShape` whose dimensions are the concatenation of the + dimensions in `self` and `other`. + + + +- - - + +#### `tf.TensorShape.ndims` {#TensorShape.ndims} + +Returns the rank of this shape, or None if it is unspecified. + + +- - - + +#### `tf.TensorShape.dims` {#TensorShape.dims} + +Returns a list of Dimensions, or None if the shape is unspecified. + + +- - - + +#### `tf.TensorShape.as_list()` {#TensorShape.as_list} + +Returns a list of integers or None for each dimension. + +##### Returns: + + A list of integers or None for each dimension. + + +- - - + +#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto} + +Returns this shape as a `TensorShapeProto`. + + +- - - + +#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with} + +Returns True iff `self` is compatible with `other`. + +Two possibly-partially-defined shapes are compatible if there +exists a fully-defined shape that both shapes can represent. Thus, +compatibility allows the shape inference code to reason about +partially-defined shapes. For example: + +* TensorShape(None) is compatible with all shapes. 
+ +* TensorShape([None, None]) is compatible with all two-dimensional + shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is + not compatible with, for example, TensorShape([None]) or + TensorShape([None, None, None]). + +* TensorShape([32, None]) is compatible with all two-dimensional shapes + with size 32 in the 0th dimension, and also TensorShape([None, None]) + and TensorShape(None). It is not compatible with, for example, + TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]). + +* TensorShape([32, 784]) is compatible with itself, and also + TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None, + None]) and TensorShape(None). It is not compatible with, for example, + TensorShape([32, 1, 784]) or TensorShape([None]). + +The compatibility relation is reflexive and symmetric, but not +transitive. For example, TensorShape([32, 784]) is compatible with +TensorShape(None), and TensorShape(None) is compatible with +TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with +TensorShape([4, 4]). + +##### Args: + + +* `other`: Another TensorShape. + +##### Returns: + + True iff `self` is compatible with `other`. + + +- - - + +#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined} + +Returns True iff `self` is fully defined in every dimension. + + + +- - - + +#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank} + +Returns a shape based on `self` with the given rank. + +This method promotes a completely unknown shape to one with a +known rank. + +##### Args: + + +* `rank`: An integer. + +##### Returns: + + A shape that is at least as specific as `self` with the given rank. + +##### Raises: + + +* `ValueError`: If `self` does not represent a shape with the given `rank`. + + +- - - + +#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least} + +Returns a shape based on `self` with at least the given rank. + +##### Args: + + +* `rank`: An integer. 
+ +##### Returns: + + A shape that is at least as specific as `self` with at least the given + rank. + +##### Raises: + + +* `ValueError`: If `self` does not represent a shape with at least the given + `rank`. + + +- - - + +#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most} + +Returns a shape based on `self` with at most the given rank. + +##### Args: + + +* `rank`: An integer. + +##### Returns: + + A shape that is at least as specific as `self` with at most the given + rank. + +##### Raises: + + +* `ValueError`: If `self` does not represent a shape with at most the given + `rank`. + + + +- - - + +#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank} + +Raises an exception if `self` is not compatible with the given `rank`. + +##### Args: + + +* `rank`: An integer. + +##### Raises: + + +* `ValueError`: If `self` does not represent a shape with the given `rank`. + + +- - - + +#### `tf.TensorShape.assert_same_rank(other)` {#TensorShape.assert_same_rank} + +Raises an exception if `self` and `other` do not have compatible ranks. + +##### Args: + + +* `other`: Another `TensorShape`. + +##### Raises: + + +* `ValueError`: If `self` and `other` do not represent shapes with the + same rank. + + +- - - + +#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with} + +Raises exception if `self` and `other` do not represent the same shape. + +This method can be used to assert that there exists a shape that both +`self` and `other` represent. + +##### Args: + + +* `other`: Another TensorShape. + +##### Raises: + + +* `ValueError`: If `self` and `other` do not represent the same shape. + + +- - - + +#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined} + +Raises an exception if `self` is not fully defined in every dimension. + +##### Raises: + + +* `ValueError`: If `self` does not have a known value for every dimension. 
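+The compatibility relation described above can be sketched in plain Python, modeling a shape as `None` (unknown rank) or a list whose entries are ints or `None` (unknown dimension sizes). This is an illustrative analogue, not the real `TensorShape` code:
+
+```python
+def is_compatible_with(a, b):
+    """True iff some fully-defined shape is represented by both a and b."""
+    if a is None or b is None:   # unknown rank is compatible with anything
+        return True
+    if len(a) != len(b):         # known ranks must agree
+        return False
+    # each dimension pair must agree, with None matching any size
+    return all(x is None or y is None or x == y for x, y in zip(a, b))
+
+print(is_compatible_with([32, None], [32, 784]))  # True
+print(is_compatible_with([32, 784], [4, 4]))      # False
+```
+
+Note how the non-transitivity falls out: `[32, 784]` is compatible with `None`, and `None` with `[4, 4]`, yet `[32, 784]` and `[4, 4]` are incompatible.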
+ + + +#### Other Methods +- - - + +#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__} + +Creates a new TensorShape with the given dimensions. + +##### Args: + + +* `dims`: A list of Dimensions, or None if the shape is unspecified. +* `DEPRECATED`: A single integer is treated as a singleton list. + +##### Raises: + + +* `TypeError`: If dims cannot be converted to a list of dimensions. + + +- - - + +#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements} + +Returns the total number of elements, or None for incomplete shapes. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.VarLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.VarLenFeature.md new file mode 100644 index 0000000000..a7b49bfcd6 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.VarLenFeature.md @@ -0,0 +1,11 @@ +Configuration for parsing a variable-length input feature. + +Fields: + dtype: Data type of input. +- - - + +#### `tf.VarLenFeature.dtype` {#VarLenFeature.dtype} + +Alias for field number 0 + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.abs.md new file mode 100644 index 0000000000..63a0b4c954 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.abs.md @@ -0,0 +1,22 @@ +### `tf.abs(x, name=None)` {#abs} + +Computes the absolute value of a tensor. + +Given a tensor of real numbers `x`, this operation returns a tensor +containing the absolute value of each element in `x`. For example, if x is +an input element and y is an output element, this operation computes +\\(y = |x|\\). + +See [`tf.complex_abs()`](#tf_complex_abs) to compute the absolute value of a complex +number. + +##### Args: + + +* `x`: A `Tensor` of type `float`, `double`, `int32`, or `int64`. +* `name`: A name for the operation (optional).
+ +##### Returns: + + A `Tensor` the same size and type as `x` with absolute values. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.add.md deleted file mode 100644 index 738f0337d3..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.add.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.add(x, y, name=None)` {#add} - -Returns x + y element-wise. - -*NOTE*: Add supports broadcasting. AddN does not. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`. -* `y`: A `Tensor`. Must have the same type as `x`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.all_variables.md deleted file mode 100644 index 904b99f321..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.all_variables.md +++ /dev/null @@ -1,12 +0,0 @@ -### `tf.all_variables()` {#all_variables} - -Returns all variables that must be saved/restored. - -The `Variable()` constructor automatically adds new variables to the graph -collection `GraphKeys.VARIABLES`. This convenience function returns the -contents of that collection. - -##### Returns: - - A list of `Variable` objects. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.as_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.as_dtype.md new file mode 100644 index 0000000000..50a048aacb --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.as_dtype.md @@ -0,0 +1,21 @@ +### `tf.as_dtype(type_value)` {#as_dtype} + +Converts the given `type_value` to a `DType`. 
+ +##### Args: + + +* `type_value`: A value that can be converted to a `tf.DType` + object. This may currently be a `tf.DType` object, a + [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), + a string type name, or a `numpy.dtype`. + +##### Returns: + + A `DType` corresponding to `type_value`. + +##### Raises: + + +* `TypeError`: If `type_value` cannot be converted to a `DType`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assert_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assert_less.md new file mode 100644 index 0000000000..eb43a62444 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assert_less.md @@ -0,0 +1,35 @@ +### `tf.assert_less(x, y, data=None, summarize=None, name=None)` {#assert_less} + +Assert the condition `x < y` holds element-wise. + +Example of adding a dependency to an operation: + +```python +with tf.control_dependencies([tf.assert_less(x, y)]): + output = tf.reduce_sum(x) +``` + +Example of adding dependency to the tensor being checked: + +```python +x = tf.with_dependencies([tf.assert_less(x, y)], x) +``` + +This condition holds if for every pair of (possibly broadcast) elements +`x[i]`, `y[i]`, we have `x[i] < y[i]`. +If both `x` and `y` are empty, this is trivially satisfied. + +##### Args: + + +* `x`: Numeric `Tensor`. +* `y`: Numeric `Tensor`, same dtype as and broadcastable to `x`. +* `data`: The tensors to print out if the condition is False. Defaults to + error message and first few entries of `x`, `y`. +* `summarize`: Print this many entries of each tensor. +* `name`: A name for this operation (optional). Defaults to "assert_less". + +##### Returns: + + Op that raises `InvalidArgumentError` if `x < y` is False. 
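+The element-wise condition can be stated as a plain-Python check over two equal-length sequences (broadcasting omitted). This is an illustrative analogue of the graph op, not its implementation:
+
+```python
+def check_less(x, y):
+    """Raise ValueError unless x[i] < y[i] for every i."""
+    for i, (a, b) in enumerate(zip(x, y)):
+        if not a < b:
+            raise ValueError(
+                "x[%d] = %r is not less than y[%d] = %r" % (i, a, i, b))
+
+check_less([1, 2], [2, 3])  # passes silently
+check_less([], [])          # empty inputs are trivially satisfied
+```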
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_cholesky_solve.md new file mode 100644 index 0000000000..25fcc5c908 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_cholesky_solve.md @@ -0,0 +1,35 @@ +### `tf.batch_cholesky_solve(chol, rhs, name=None)` {#batch_cholesky_solve} + +Solve batches of linear eqns `A X = RHS`, given Cholesky factorizations. + +```python +# Solve one linear system (K = 1) for every member of the length 10 batch. +A = ... # shape 10 x 2 x 2 +RHS = ... # shape 10 x 2 x 1 +chol = tf.batch_cholesky(A) # shape 10 x 2 x 2 +X = tf.batch_cholesky_solve(chol, RHS) # shape 10 x 2 x 1 +# tf.matmul(A, X) ~ RHS +X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0] + +# Solve five linear systems (K = 5) for every member of the length 10 batch. +A = ... # shape 10 x 2 x 2 +RHS = ... # shape 10 x 2 x 5 +... +X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2] +``` + +##### Args: + + +* `chol`: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. + Cholesky factorization of `A`, e.g. `chol = tf.batch_cholesky(A)`. + For that reason, only the lower triangular parts (including the diagonal) + of the last two dimensions of `chol` are used. The strictly upper part is + assumed to be zero and not accessed. +* `rhs`: A `Tensor`, same type as `chol`, shape is `[..., M, K]`. +* `name`: A name to give this `Op`. Defaults to `batch_cholesky_solve`. + +##### Returns: + + Solution to `A x = rhs`, shape `[..., M, K]`. 
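+For a single small system, the two triangular solves behind `batch_cholesky_solve` can be written out directly. This pure-Python sketch handles one matrix; the op performs the same computation in batch over the last two dimensions:
+
+```python
+def cholesky_solve(chol, rhs):
+    """Solve A x = rhs given lower-triangular chol with A = chol . chol^T.
+
+    chol: n x n lower-triangular matrix as a list of lists; rhs: length-n list.
+    """
+    n = len(chol)
+    # forward substitution: chol . y = rhs
+    y = [0.0] * n
+    for i in range(n):
+        y[i] = (rhs[i] - sum(chol[i][j] * y[j] for j in range(i))) / chol[i][i]
+    # back substitution: chol^T . x = y
+    x = [0.0] * n
+    for i in reversed(range(n)):
+        x[i] = (y[i] - sum(chol[j][i] * x[j]
+                           for j in range(i + 1, n))) / chol[i][i]
+    return x
+
+# chol = [[2, 0], [1, 1]] factors A = [[4, 2], [2, 2]]
+print(cholesky_solve([[2.0, 0.0], [1.0, 1.0]], [6.0, 4.0]))  # [1.0, 1.0]
+```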
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_fft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_fft.md deleted file mode 100644 index c2ea3aa9c1..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_fft.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.batch_fft(input, name=None)` {#batch_fft} - -Compute the 1-dimensional discrete Fourier Transform over the inner-most - -dimension of `input`. - -##### Args: - - -* `input`: A `Tensor` of type `complex64`. A complex64 tensor. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `complex64`. - A complex64 tensor of the same shape as `input`. The inner-most - dimension of `input` is replaced with its 1D Fourier Transform. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_ifft3d.md new file mode 100644 index 0000000000..1173a17d6d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_ifft3d.md @@ -0,0 +1,18 @@ +### `tf.batch_ifft3d(input, name=None)` {#batch_ifft3d} + +Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most + +3 dimensions of `input`. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 tensor. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. + A complex64 tensor of the same shape as `input`. The inner-most 3 + dimensions of `input` are replaced with their inverse 3D Fourier Transform. 
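+The 1-D transform pair underlying ops like `batch_fft` and `batch_ifft3d` can be illustrated with a naive O(n²) DFT in pure Python; the ops apply the same transform along the innermost dimension(s) of every batch member (sketch only, not the op's FFT implementation):
+
+```python
+import cmath
+
+def dft(x):
+    """Naive 1-D discrete Fourier transform."""
+    n = len(x)
+    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
+                for k in range(n))
+            for j in range(n)]
+
+def idft(X):
+    """Inverse transform; idft(dft(x)) recovers x up to rounding."""
+    n = len(X)
+    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
+                for j in range(n)) / n
+            for k in range(n)]
+
+x = [1.0, 2.0, 3.0, 4.0]
+roundtrip = idft(dft(x))  # approximately [1, 2, 3, 4] again
+```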
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_matrix_diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_matrix_diag_part.md deleted file mode 100644 index 0eb431d7a9..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.batch_matrix_diag_part.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.batch_matrix_diag_part(input, name=None)` {#batch_matrix_diag_part} - -Returns the batched diagonal part of a batched tensor. - -This operation returns a tensor with the `diagonal` part -of the batched `input`. The `diagonal` part is computed as follows: - -Assume `input` has `k` dimensions `[I, J, K, ..., N, N]`, then the output is a -tensor of rank `k - 1` with dimensions `[I, J, K, ..., N]` where: - -`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`. - -The input must be at least a matrix. - -For example: - -```prettyprint -# 'input' is [[[1, 0, 0, 0] - [0, 2, 0, 0] - [0, 0, 3, 0] - [0, 0, 0, 4]], - [[5, 0, 0, 0] - [0, 6, 0, 0] - [0, 0, 7, 0] - [0, 0, 0, 8]]] - -and input.shape = (2, 4, 4) - -tf.batch_matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]] - -which has shape (2, 4) -``` - -##### Args: - - -* `input`: A `Tensor`. - Rank `k` tensor where `k >= 2` and the last two dimensions are equal. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - The extracted diagonal(s) having shape - `diagonal.shape = input.shape[:-1]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.bitcast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.bitcast.md new file mode 100644 index 0000000000..4ded707a89 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.bitcast.md @@ -0,0 +1,25 @@ +### `tf.bitcast(input, type, name=None)` {#bitcast} + +Bitcasts a tensor from one type to another without copying data. 
+ +Given a tensor `input`, this operation returns a tensor that has the same buffer +data as `input` with datatype `type`. + +If the input datatype `T` is larger than the output datatype `type` then the +shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)]. + +If `T` is smaller than `type`, the operator requires that the rightmost +dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from +[..., sizeof(`type`)/sizeof(`T`)] to [...]. + +##### Args: + + +* `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. +* `type`: A `tf.DType` from: `tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `type`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.boolean_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.boolean_mask.md new file mode 100644 index 0000000000..e893b8ee63 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.boolean_mask.md @@ -0,0 +1,43 @@ +### `tf.boolean_mask(tensor, mask, name='boolean_mask')` {#boolean_mask} + +Apply boolean mask to tensor. Numpy equivalent is `tensor[mask]`. + +```python +# 1-D example +tensor = [0, 1, 2, 3] +mask = [True, False, True, False] +boolean_mask(tensor, mask) ==> [0, 2] +``` + +In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match +the first K dimensions of `tensor`'s shape. We then have: + `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]` +where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order). + +##### Args: + + +* `tensor`: N-D tensor. +* `mask`: K-D boolean tensor, K <= N and K must be known statically. 
+* `name`: A name for this operation (optional). + +##### Returns: + + Tensor populated by entries in `tensor` corresponding to `True` values in + `mask`. + +##### Raises: + + +* `ValueError`: If shapes do not conform. + + +* `Examples`: + +```python +# 2-D example +tensor = [[1, 2], [3, 4], [5, 6]] +mask = [True, False, True] +boolean_mask(tensor, mask) ==> [[1, 2], [5, 6]] +``` + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.case.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.case.md deleted file mode 100644 index 9314837b8e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.case.md +++ /dev/null @@ -1,75 +0,0 @@ -### `tf.case(pred_fn_pairs, default, exclusive=False, name='case')` {#case} - -Create a case operation. - -The `pred_fn_pairs` parameter is a dict or list of pairs of size N. -Each pair contains a boolean scalar tensor and a python callable that -creates the tensors to be returned if the boolean evaluates to True. -`default` is a callable generating a list of tensors. All the callables -in `pred_fn_pairs` as well as `default` should return the same number -and types of tensors. - -If `exclusive==True`, all predicates are evaluated, and a logging operation -with an error is returned if more than one of the predicates evaluates to -True. If `exclusive==False`, execution stops at the first predicate which -evaluates to True, and the tensors generated by the corresponding function -are returned immediately. If none of the predicates evaluate to True, this -operation returns the tensors generated by `default`.
- -Example 1: - Pseudocode: - ``` - if (x < y) return 17; - else return 23; - ``` - - Expressions: - ``` - f1 = lambda: tf.constant(17) - f2 = lambda: tf.constant(23) - r = case([(tf.less(x, y), f1)], default=f2) - ``` - -Example 2: - Pseudocode: - ``` - if (x < y && x > z) raise OpError("Only one predicate may evaluate true"); - if (x < y) return 17; - else if (x > z) return 23; - else return -1; - ``` - - Expressions: - ``` - x = tf.constant(0) - y = tf.constant(1) - z = tf.constant(2) - def f1(): return tf.constant(17) - def f2(): return tf.constant(23) - def f3(): return tf.constant(-1) - r = case({tf.less(x, y): f1, tf.greater(x, z): f2}, - default=f3, exclusive=True) - ``` - -##### Args: - - -* `pred_fn_pairs`: Dict or list of pairs of a boolean scalar tensor and a - callable which returns a list of tensors. -* `default`: A callable that returns a list of tensors. -* `exclusive`: True iff more than one predicate is allowed to evaluate to True. -* `name`: A name for this operation (optional). - -##### Returns: - - The tensors returned by the first pair whose predicate evaluated to True, or - those returned by `default` if none does. - -##### Raises: - - -* `TypeError`: If `pred_fn_pairs` is not a list/dictionary. -* `TypeError`: If `pred_fn_pairs` is a list but does not contain 2-tuples. -* `TypeError`: If `fns[i]` is not callable for any i, or `default` is not - callable. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cholesky.md deleted file mode 100644 index 4032b80d8e..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cholesky.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.cholesky(input, name=None)` {#cholesky} - -Calculates the Cholesky decomposition of a square matrix. - -The input has to be symmetric and positive definite. Only the lower-triangular -part of the input will be used for this operation. 
The upper-triangular part -will not be read. - -The result is the lower-triangular matrix of the Cholesky decomposition of the -input, `L`, so that `input = L L^*`. - -##### Args: - - -* `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. - Shape is `[M, M]`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. Shape is `[M, M]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.copy_graph.get_copied_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.copy_graph.get_copied_op.md deleted file mode 100644 index 9e5a2118fd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.copy_graph.get_copied_op.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.contrib.copy_graph.get_copied_op(org_instance, graph, scope='')` {#get_copied_op} - -Given an `Operation` instance from some `Graph`, returns -its namesake from `graph`, under the specified scope -(default `""`). - -If a copy of `org_instance` is present in `graph` under the given -`scope`, it will be returned. - -Args: -org_instance: An `Operation` from some `Graph`. -graph: The `Graph` to be searched for a copy of `org_instance`. -scope: The scope `org_instance` is present in. - -##### Returns: - - The `Operation` copy from `graph`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.DirichletMultinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.DirichletMultinomial.md new file mode 100644 index 0000000000..1d8cb6a6dd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.DirichletMultinomial.md @@ -0,0 +1,185 @@ +DirichletMultinomial mixture distribution. + +This distribution is parameterized by a vector `alpha` of concentration +parameters for `k` classes.
+ +#### Mathematical details + +The Dirichlet Multinomial is a distribution over k-class count data, meaning +for each k-tuple of non-negative integer `counts = [c_1,...,c_k]`, we have a +probability of these draws being made from the distribution. The distribution +has hyperparameters `alpha = (alpha_1,...,alpha_k)`, and probability mass +function (pmf): + +```pmf(counts) = C! / (c_1!...c_k!) * Beta(alpha + c) / Beta(alpha)``` + +where above `C = sum_j c_j`, `N!` is `N` factorial, and +`Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the multivariate beta +function. + +This is a mixture distribution in that `N` samples can be produced by: + 1. Choose class probabilities `p = (p_1,...,p_k) ~ Dir(alpha)` + 2. Draw integers `m = (m_1,...,m_k) ~ Multinomial(p, N)` + +This class provides methods to create indexed batches of Dirichlet +Multinomial distributions. If the provided `alpha` is rank 2 or higher, for +every fixed set of leading dimensions, the last dimension represents one +single Dirichlet Multinomial distribution. When calling distribution +functions (e.g. `dist.pdf(counts)`), `alpha` and `counts` are broadcast to the +same shape (if possible). In all cases, the last dimension of alpha/counts +represents single Dirichlet Multinomial distributions. + +#### Examples + +```python +alpha = [1, 2, 3] +dist = DirichletMultinomial(alpha) +``` + +Creates a 3-class distribution, with the 3rd class most likely to be drawn. +The distribution functions can be evaluated on counts. + +```python +# counts same shape as alpha. +counts = [0, 2, 0] +dist.pdf(counts) # Shape [] + +# alpha will be broadcast to [[1, 2, 3], [1, 2, 3]] to match counts. +counts = [[11, 22, 33], [44, 55, 66]] +dist.pdf(counts) # Shape [2] + +# alpha will be broadcast to shape [5, 7, 3] to match counts. +counts = [[...]] # Shape [5, 7, 3] +dist.pdf(counts) # Shape [5, 7] +``` + +Creates a 2-batch of 3-class distributions.
+ +```python +alpha = [[1, 2, 3], [4, 5, 6]] # Shape [2, 3] +dist = DirichletMultinomial(alpha) + +# counts will be broadcast to [[11, 22, 33], [11, 22, 33]] to match alpha. +counts = [11, 22, 33] +dist.pdf(counts) # Shape [2] +``` +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.__init__(alpha)` {#DirichletMultinomial.__init__} + +Initialize a batch of DirichletMultinomial distributions. + +##### Args: + + +* `alpha`: Shape `[N1,..., Nn, k]` positive `float` or `double` tensor with + `n >= 0`. Defines this as a batch of `N1 x ... x Nn` different `k` + class Dirichlet multinomial distributions. + + +* `Examples`: + +```python +# Define 1-batch of 2-class Dirichlet multinomial distribution, +# also known as a beta-binomial. +dist = DirichletMultinomial([1.1, 2.0]) + +# Define a 2-batch of 3-class distributions. +dist = DirichletMultinomial([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) +``` + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.alpha` {#DirichletMultinomial.alpha} + +Parameters defining this distribution. + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.cdf(x)` {#DirichletMultinomial.cdf} + + + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.dtype` {#DirichletMultinomial.dtype} + + + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.log_cdf(x)` {#DirichletMultinomial.log_cdf} + + + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.log_pmf(counts, name=None)` {#DirichletMultinomial.log_pmf} + +`Log(P[counts])`, computed for every batch member. + +For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability +that after sampling `sum_j c_j` draws from this Dirichlet Multinomial +distribution, the number of draws falling in class `j` is `c_j`. Note that +different sequences of draws can result in the same counts, thus the +probability includes a combinatorial coefficient. 
+ +##### Args: + + +* `counts`: Non-negative `float`, `double`, or `int` tensor whose shape can + be broadcast with `self.alpha`. For fixed leading dimensions, the last + dimension represents counts for the corresponding Dirichlet Multinomial + distribution in `self.alpha`. +* `name`: Name to give this Op, defaults to "log_pmf". + +##### Returns: + + Log probabilities for each record, shape `[N1,...,Nn]`. + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.mean` {#DirichletMultinomial.mean} + +Class means for every batch member. + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.num_classes` {#DirichletMultinomial.num_classes} + +Tensor providing number of classes in each batch member. + + +- - - + +#### `tf.contrib.distributions.DirichletMultinomial.pmf(counts, name=None)` {#DirichletMultinomial.pmf} + +`P[counts]`, computed for every batch member. + +For each batch of counts `[c_1,...,c_k]`, `P[counts]` is the probability +that after sampling `sum_j c_j` draws from this Dirichlet Multinomial +distribution, the number of draws falling in class `j` is `c_j`. Note that +different sequences of draws can result in the same counts, thus the +probability includes a combinatorial coefficient. + +##### Args: + + +* `counts`: Non-negative `float`, `double`, or `int` tensor whose shape can + be broadcast with `self.alpha`. For fixed leading dimensions, the last + dimension represents counts for the corresponding Dirichlet Multinomial + distribution in `self.alpha`. +* `name`: Name to give this Op, defaults to "pmf". + +##### Returns: + + Probabilities for each record, shape `[N1,...,Nn]`. 
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Normal.md new file mode 100644 index 0000000000..d15dd93a65 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Normal.md @@ -0,0 +1,209 @@ +The scalar Normal distribution with mean and stddev parameters mu, sigma. + +#### Mathematical details + +The PDF of this distribution is: + +```f(x) = sqrt(1/(2*pi*sigma^2)) exp(-(x-mu)^2/(2*sigma^2))``` + +#### Examples + +Examples of initialization of one or a batch of distributions. + +```python +# Define a single scalar Normal distribution. +dist = tf.contrib.distributions.Normal(mu=0, sigma=3) + +# Evaluate the cdf at 1, returning a scalar. +dist.cdf(1) + +# Define a batch of two scalar valued Normals. +# The first has mean 1 and standard deviation 11, the second 2 and 22. +dist = tf.contrib.distributions.Normal(mu=[1, 2.], sigma=[11, 22.]) + +# Evaluate the pdf of the first distribution on 0, and the second on 1.5, +# returning a length two tensor. +dist.pdf([0, 1.5]) + +# Get 3 samples, returning a 3 x 2 tensor. +dist.sample(3) +``` + +Arguments are broadcast when possible. + +```python +# Define a batch of two scalar valued Normals. +# Both have mean 1, but different standard deviations. +dist = tf.contrib.distributions.Normal(mu=1, sigma=[11, 22.]) + +# Evaluate the pdf of both distributions on the same point, 3.0, +# returning a length 2 tensor. +dist.pdf(3.0) +``` +- - - + +#### `tf.contrib.distributions.Normal.__init__(mu, sigma, name=None)` {#Normal.__init__} + +Construct Normal distributions with mean and stddev `mu` and `sigma`. + +The parameters `mu` and `sigma` must be shaped in a way that supports +broadcasting (e.g. `mu + sigma` is a valid operation). + +##### Args: + + +* `mu`: `float` or `double` tensor, the means of the distribution(s). 
+* `sigma`: `float` or `double` tensor, the stddevs of the distribution(s). + sigma must contain only positive values. +* `name`: The name to give Ops created by the initializer. + +##### Raises: + + +* `TypeError`: if mu and sigma are different dtypes. + + +- - - + +#### `tf.contrib.distributions.Normal.cdf(x, name=None)` {#Normal.cdf} + +CDF of observations in `x` under these Normal distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `cdf`: tensor of dtype `dtype`, the CDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.Normal.dtype` {#Normal.dtype} + + + + +- - - + +#### `tf.contrib.distributions.Normal.entropy(name=None)` {#Normal.entropy} + +The entropy of Normal distribution(s). + +##### Args: + + +* `name`: The name to give this op. + +##### Returns: + + +* `entropy`: tensor of dtype `dtype`, the entropy. + + +- - - + +#### `tf.contrib.distributions.Normal.is_reparameterized` {#Normal.is_reparameterized} + + + + +- - - + +#### `tf.contrib.distributions.Normal.log_cdf(x, name=None)` {#Normal.log_cdf} + +Log CDF of observations `x` under these Normal distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_cdf`: tensor of dtype `dtype`, the log-CDFs of `x`. + + +- - - + +#### `tf.contrib.distributions.Normal.log_pdf(x, name=None)` {#Normal.log_pdf} + +Log pdf of observations in `x` under these Normal distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `log_pdf`: tensor of dtype `dtype`, the log-PDFs of `x`. 
+ + +- - - + +#### `tf.contrib.distributions.Normal.mean` {#Normal.mean} + + + + +- - - + +#### `tf.contrib.distributions.Normal.mu` {#Normal.mu} + + + + +- - - + +#### `tf.contrib.distributions.Normal.pdf(x, name=None)` {#Normal.pdf} + +The PDF of observations in `x` under these Normal distribution(s). + +##### Args: + + +* `x`: tensor of dtype `dtype`, must be broadcastable with `mu` and `sigma`. +* `name`: The name to give this op. + +##### Returns: + + +* `pdf`: tensor of dtype `dtype`, the pdf values of `x`. + + +- - - + +#### `tf.contrib.distributions.Normal.sample(n, seed=None, name=None)` {#Normal.sample} + +Sample `n` observations from the Normal Distributions. + +##### Args: + + +* `n`: `Scalar`, type int32, the number of observations to sample. +* `seed`: Python integer, the random seed. +* `name`: The name to give this op. + +##### Returns: + + +* `samples`: `[n, ...]`, a `Tensor` of `n` samples for each + of the distributions determined by broadcasting the hyperparameters. + + +- - - + +#### `tf.contrib.distributions.Normal.sigma` {#Normal.sigma} + + + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.encode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.encode_audio.md deleted file mode 100644 index fb9d958f26..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.encode_audio.md +++ /dev/null @@ -1,19 +0,0 @@ -### `tf.contrib.ffmpeg.encode_audio(audio, file_format=None, samples_per_second=None)` {#encode_audio} - -Creates an op that encodes an audio file using sampled audio from a tensor. - -##### Args: - - -* `audio`: A rank 2 tensor that has time along dimension 0 and channels along - dimension 1. Dimension 0 is `samples_per_second * length` long in - seconds. -* `file_format`: The type of file to encode. "wav" is the only supported format. 
-* `samples_per_second`: The number of samples in the audio tensor per second of - audio. - -##### Returns: - - A scalar tensor that contains the encoded audio in the specified file - format. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.optimize_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.optimize_loss.md deleted file mode 100644 index db0b01186a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.optimize_loss.md +++ /dev/null @@ -1,43 +0,0 @@ -### `tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, moving_average_decay=0.9, learning_rate_decay_fn=None, variables=None, name=None)` {#optimize_loss} - -Given loss and parameters for optimizer, returns a training op. - -##### Args: - - -* `loss`: Tensor, 0 dimensional. -* `global_step`: Tensor, step counter for each update. -* `learning_rate`: float or Tensor, magnitude of update per each training step. -* `optimizer`: string, class or optimizer instance, used as trainer. - string should be name of optimizer, like 'SGD', - 'Adam', 'Adagrad'. Full list in OPTIMIZER_CLS_NAMES constant. - class should be a sub-class of tf.Optimizer that implements - `compute_gradients` and `apply_gradients` functions. - optimizer instance should be an instantiation of a tf.Optimizer sub-class - and have `compute_gradients` and `apply_gradients` functions. -* `gradient_noise_scale`: float or None, adds 0-mean normal noise scaled by this - value. -* `gradient_multipliers`: dict of variables or variable names to floats. - If present, gradients for specified - variables will be multiplied by given constant. -* `clip_gradients`: float or `None`, clips gradients by this value. -* `moving_average_decay`: float or None, takes into account previous loss - to make learning smoother due to outliers.
-* `learning_rate_decay_fn`: function, takes `learning_rate` and `global_step` - `Tensor`s, returns `Tensor`. - Can be used to implement any learning rate decay - functions. - For example: tf.train.exponential_decay. -* `variables`: list of variables to optimize or - `None` to use all trainable variables. -* `name`: The name for this operation, used to scope operations and summaries. - -##### Returns: - - Training op. - -##### Raises: - - -* `ValueError`: if optimizer is wrong type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.BaseEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.BaseEstimator.md new file mode 100644 index 0000000000..034af231a1 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.BaseEstimator.md @@ -0,0 +1,189 @@ +Abstract BaseEstimator class to train and evaluate TensorFlow models. + +Concrete implementations of this class should provide the following functions: + * _get_train_ops + * _get_eval_ops + * _get_predict_ops +It may override _get_default_metric_functions. + +`Estimator` implemented below is a good example of how to use this class. + +Parameters: + model_dir: Directory to save model parameters, graph, etc. +- - - + +#### `tf.contrib.learn.BaseEstimator.__init__(model_dir=None, config=None)` {#BaseEstimator.__init__} + + + + +- - - + +#### `tf.contrib.learn.BaseEstimator.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=32, steps=None, metrics=None, name=None)` {#BaseEstimator.evaluate} + +Evaluates given model with provided evaluation data. + +##### Args: + + +* `x`: features. +* `y`: targets. +* `input_fn`: Input function. If set, x and y must be None. +* `feed_fn`: Function creating a feed dict every time it is called. Called + once per iteration. +* `batch_size`: minibatch size to use on the input, defaults to 32. Ignored + if input_fn is set.
+* `steps`: Number of steps to evaluate for. +* `metrics`: Dict of metric ops to run. If None, the default metric functions + are used; if {}, no metrics are used. +* `name`: Name of the evaluation if user needs to run multiple evaluations on + different data sets, such as evaluating on training data vs. test data. + +##### Returns: + + Returns self. + +##### Raises: + + +* `ValueError`: If x or y are not None while input_fn or feed_fn is not None. + + +- - - + +#### `tf.contrib.learn.BaseEstimator.fit(x, y, steps, batch_size=32, monitors=None)` {#BaseEstimator.fit} + +Trains a model given training data X and y. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be + iterator that returns arrays of features. The training input + samples for fitting the model. +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be + iterator that returns array of targets. The training target values + (class labels in classification, real numbers in regression). +* `steps`: number of steps to train model for. +* `batch_size`: minibatch size to use on the input, defaults to 32. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.BaseEstimator.get_params(deep=True)` {#BaseEstimator.get_params} + +Get parameters for this estimator. + +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.BaseEstimator.model_dir` {#BaseEstimator.model_dir} + + + + +- - - + +#### `tf.contrib.learn.BaseEstimator.partial_fit(x, y, steps=1, batch_size=32, monitors=None)` {#BaseEstimator.partial_fit} + +Incremental fit on a batch of samples.
+ +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This either can +implement iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at the same time. Or when model is taking long time +to converge, and you want to split up training into subparts. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be + iterator that returns arrays of features. The training input + samples for fitting the model. +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be + iterator that returns array of targets. The training target values + (class label in classification, real numbers in regression). +* `steps`: number of steps to train model for. +* `batch_size`: minibatch size to use on the input, defaults to 32. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.BaseEstimator.predict(x=None, input_fn=None, batch_size=None)` {#BaseEstimator.predict} + +Returns predictions for given features. + +##### Args: + + +* `x`: features. +* `input_fn`: Input function. If set, x must be None. +* `batch_size`: Override default batch size. + +##### Returns: + + Numpy array of predicted classes or regression values. + + +- - - + +#### `tf.contrib.learn.BaseEstimator.set_params(**params)` {#BaseEstimator.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The former have parameters of the form +``__`` so that it's possible to update each +component of a nested object. + +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.BaseEstimator.train(input_fn, steps, monitors=None)` {#BaseEstimator.train} + +Trains a model given input builder function. 
+ +##### Args: + + +* `input_fn`: Input builder function, returns tuple of dicts or + dict and Tensor. +* `steps`: number of steps to train model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRNNClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRNNClassifier.md deleted file mode 100644 index 130b2706de..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRNNClassifier.md +++ /dev/null @@ -1,312 +0,0 @@ -TensorFlow RNN Classifier model. - -Parameters: - rnn_size: The size for rnn cell, e.g. size of your word embeddings. - cell_type: The type of rnn cell, including rnn, gru, and lstm. - num_layers: The number of layers of the rnn model. - input_op_fn: Function that will transform the input tensor, such as - creating word embeddings, byte list, etc. This takes - an argument X for input and returns transformed X. - bidirectional: boolean, Whether this is a bidirectional rnn. - sequence_length: If sequence_length is provided, dynamic calculation is - performed. This saves computational time when unrolling past max sequence - length. - initial_state: An initial state for the RNN. This must be a tensor of - appropriate type and shape [batch_size x cell.state_size]. - n_classes: Number of classes in the target. - batch_size: Mini batch size. - steps: Number of steps to run over data. - optimizer: Optimizer name (or class), for example "SGD", "Adam", "Adagrad". - learning_rate: If this is constant float value, no decay function is - used. Instead, a customized decay function can be passed that accepts - global_step as parameter and returns a Tensor. - e.g. 
exponential decay function: - def exp_decay(global_step): - return tf.train.exponential_decay( - learning_rate=0.1, global_step=global_step, - decay_steps=2, decay_rate=0.001) - class_weight: None or list of n_classes floats. Weight associated with - classes for loss computation. If not given, all classes are - supposed to have weight one. - continue_training: when continue_training is True, once initialized - model will be continually trained on every call of fit. - config: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.__init__(rnn_size, n_classes, cell_type='gru', num_layers=1, input_op_fn=null_input_op_fn, initial_state=None, bidirectional=False, sequence_length=None, batch_size=32, steps=50, optimizer='Adagrad', learning_rate=0.1, class_weight=None, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRNNClassifier.__init__} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.bias_` {#TensorFlowRNNClassifier.bias_} - -Returns bias of the rnn layer. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRNNClassifier.evaluate} - -See base class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRNNClassifier.fit} - -Builds a neural network model given provided `model_fn` and training -data X and y. - -Note: the first call constructs the graph and initializes -variables. Consecutive calls will continue training the same model. -This logic follows the partial_fit() interface in scikit-learn. - -To restart learning, create a new estimator. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be - iterator that returns arrays of features. The training input - samples for fitting the model.
- -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be - iterator that returns array of targets. The training target values - (class labels in classification, real numbers in regression). - -* `steps`: int, number of steps to train. - If None or 0, train for `self.steps`. -* `monitors`: List of `BaseMonitor` objects to print training progress and - invoke early stopping. -* `logdir`: the directory to save the log file that can be used for - optional visualization. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.get_params(deep=True)` {#TensorFlowRNNClassifier.get_params} - -Get parameters for this estimator. - -##### Args: - - -* `deep`: boolean, optional - If True, will return the parameters for this estimator and - contained subobjects that are estimators. - -##### Returns: - - params : mapping of string to any - Parameter names mapped to their values. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.get_tensor(name)` {#TensorFlowRNNClassifier.get_tensor} - -Returns tensor by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.get_tensor_value(name)` {#TensorFlowRNNClassifier.get_tensor_value} - -Returns value of the tensor given by name. - -##### Args: - - -* `name`: string, name of the tensor. - -##### Returns: - - Numpy array - value of the tensor. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.get_variable_names()` {#TensorFlowRNNClassifier.get_variable_names} - -Returns list of all variable names in this model. - -##### Returns: - - List of names. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.model_dir` {#TensorFlowRNNClassifier.model_dir} - - - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.partial_fit(x, y)` {#TensorFlowRNNClassifier.partial_fit} - -Incremental fit on a batch of samples.
- -This method is expected to be called several times consecutively -on different or the same chunks of the dataset. This can implement -either iterative training or out-of-core/online training. - -This is especially useful when the whole dataset is too big to -fit in memory at once, or when the model takes a long time to -converge and you want to split up training into subparts. - -##### Args: - - -* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an - iterator that returns arrays of features. The training input - samples for fitting the model. - -* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an - iterator that returns arrays of targets. The training target values - (class labels in classification, real numbers in regression). - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.predict(x, axis=1, batch_size=None)` {#TensorFlowRNNClassifier.predict} - -Predict class or regression value for X. - -For a classification model, the predicted class for each sample in X is -returned. For a regression model, the predicted value based on X is -returned. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `axis`: Which axis to argmax for classification. - By default axis 1 (next after batch) is used. - Use 2 for sequence predictions. -* `batch_size`: If the test set is too big, use batch size to split - it into mini-batches. By default the batch_size member - variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples]. The predicted classes or predicted - value. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.predict_proba(x, batch_size=None)` {#TensorFlowRNNClassifier.predict_proba} - -Predict class probabilities of the input samples X. - -##### Args: - - -* `x`: array-like matrix, [n_samples, n_features...] or iterator. -* `batch_size`: If the test set is too big, use batch size to split - it into mini-batches. 
By default the batch_size member variable is used. - -##### Returns: - - -* `y`: array of shape [n_samples, n_classes]. The predicted - probabilities for each class. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.restore(cls, path, config=None)` {#TensorFlowRNNClassifier.restore} - -Restores the model from the given path. - -##### Args: - - -* `path`: Path to the checkpoints and other model information. -* `config`: RunConfig object that controls the configurations of the session, - e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be - reconfigured. - -##### Returns: - - Estimator, object of the subclass of TensorFlowEstimator. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.save(path)` {#TensorFlowRNNClassifier.save} - -Saves checkpoints and graph to the given path. - -##### Args: - - -* `path`: Folder to save the model to. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.set_params(**params)` {#TensorFlowRNNClassifier.set_params} - -Set the parameters of this estimator. - -The method works on simple estimators as well as on nested objects -(such as pipelines). The former have parameters of the form -``<component>__<parameter>`` so that it's possible to update each -component of a nested object. - -##### Returns: - - self - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.train(input_fn, steps, monitors=None)` {#TensorFlowRNNClassifier.train} - -Trains a model given an input builder function. - -##### Args: - - -* `input_fn`: Input builder function; returns a tuple of dicts or - a dict and a Tensor. -* `steps`: number of steps to train the model for. -* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks - inside the training loop. - -##### Returns: - - Returns self. - - -- - - - -#### `tf.contrib.learn.TensorFlowRNNClassifier.weights_` {#TensorFlowRNNClassifier.weights_} - -Returns the weights of the RNN layer. 
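The relationship between `predict` and `predict_proba` documented above can be sketched in plain Python. This is an illustrative sketch only, not the TensorFlow implementation, and the helper names are hypothetical: for a classifier, `predict` reduces the `[n_samples, n_classes]` probabilities from `predict_proba` to labels by taking an argmax over the class axis (axis 1).

```python
# Illustrative sketch (not the TensorFlow implementation): `predict` as
# an argmax over the class axis of `predict_proba` output.
def argmax(row):
    """Index of the largest value in a list."""
    return max(range(len(row)), key=row.__getitem__)

def predict_from_proba(proba):
    """proba: [n_samples, n_classes] nested lists -> [n_samples] labels."""
    return [argmax(row) for row in proba]

proba = [[0.1, 0.7, 0.2],
         [0.8, 0.1, 0.1]]
print(predict_from_proba(proba))  # [1, 0]
```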
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRegressor.md new file mode 100644 index 0000000000..169509f72f --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.TensorFlowRegressor.md @@ -0,0 +1,279 @@ +TensorFlow Linear Regression model. +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.__init__(n_classes=0, batch_size=32, steps=200, optimizer='Adagrad', learning_rate=0.1, clip_gradients=5.0, continue_training=False, config=None, verbose=1)` {#TensorFlowRegressor.__init__} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.bias_` {#TensorFlowRegressor.bias_} + +Returns the bias of the linear regression. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.evaluate(x=None, y=None, input_fn=None, steps=None)` {#TensorFlowRegressor.evaluate} + +See base class. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.fit(x, y, steps=None, monitors=None, logdir=None)` {#TensorFlowRegressor.fit} + +Builds a neural network model using the provided `model_fn` and training +data X and y. + +Note: the first call constructs the graph and initializes the +variables. Consecutive calls will continue training the same model. +This logic follows the partial_fit() interface in scikit-learn. + +To restart learning, create a new estimator. + +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns arrays of targets. The training target values + (class labels in classification, real numbers in regression). + +* `steps`: int, number of steps to train. + If None or 0, train for `self.steps`. 
+* `monitors`: List of `BaseMonitor` objects to print training progress and + invoke early stopping. +* `logdir`: the directory to save the log file that can be used for + optional visualization. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.get_params(deep=True)` {#TensorFlowRegressor.get_params} + +Get parameters for this estimator. + +##### Args: + + +* `deep`: boolean, optional + If True, will return the parameters for this estimator and + contained subobjects that are estimators. + +##### Returns: + + params : mapping of string to any + Parameter names mapped to their values. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.get_tensor(name)` {#TensorFlowRegressor.get_tensor} + +Returns a tensor by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.get_tensor_value(name)` {#TensorFlowRegressor.get_tensor_value} + +Returns the value of the tensor given by name. + +##### Args: + + +* `name`: string, name of the tensor. + +##### Returns: + + Numpy array - value of the tensor. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.get_variable_names()` {#TensorFlowRegressor.get_variable_names} + +Returns a list of all variable names in this model. + +##### Returns: + + List of names. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.model_dir` {#TensorFlowRegressor.model_dir} + + + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.partial_fit(x, y)` {#TensorFlowRegressor.partial_fit} + +Incremental fit on a batch of samples. + +This method is expected to be called several times consecutively +on different or the same chunks of the dataset. This can implement +either iterative training or out-of-core/online training. + +This is especially useful when the whole dataset is too big to +fit in memory at once, or when the model takes a long time to +converge and you want to split up training into subparts. 
+ +##### Args: + + +* `x`: matrix or tensor of shape [n_samples, n_features...]. Can be an + iterator that returns arrays of features. The training input + samples for fitting the model. + +* `y`: vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an + iterator that returns arrays of targets. The training target values + (class labels in classification, real numbers in regression). + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.predict(x, axis=1, batch_size=None)` {#TensorFlowRegressor.predict} + +Predict class or regression value for X. + +For a classification model, the predicted class for each sample in X is +returned. For a regression model, the predicted value based on X is +returned. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `axis`: Which axis to argmax for classification. + By default axis 1 (next after batch) is used. + Use 2 for sequence predictions. +* `batch_size`: If the test set is too big, use batch size to split + it into mini-batches. By default the batch_size member + variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples]. The predicted classes or predicted + value. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.predict_proba(x, batch_size=None)` {#TensorFlowRegressor.predict_proba} + +Predict class probabilities of the input samples X. + +##### Args: + + +* `x`: array-like matrix, [n_samples, n_features...] or iterator. +* `batch_size`: If the test set is too big, use batch size to split + it into mini-batches. By default the batch_size member variable is used. + +##### Returns: + + +* `y`: array of shape [n_samples, n_classes]. The predicted + probabilities for each class. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.restore(cls, path, config=None)` {#TensorFlowRegressor.restore} + +Restores the model from the given path. + +##### Args: + + +* `path`: Path to the checkpoints and other model information. 
+* `config`: RunConfig object that controls the configurations of the session, + e.g. num_cores, gpu_memory_fraction, etc. This is allowed to be + reconfigured. + +##### Returns: + + Estimator, object of the subclass of TensorFlowEstimator. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.save(path)` {#TensorFlowRegressor.save} + +Saves checkpoints and graph to the given path. + +##### Args: + + +* `path`: Folder to save the model to. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.set_params(**params)` {#TensorFlowRegressor.set_params} + +Set the parameters of this estimator. + +The method works on simple estimators as well as on nested objects +(such as pipelines). The former have parameters of the form +``<component>__<parameter>`` so that it's possible to update each +component of a nested object. + +##### Returns: + + self + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.train(input_fn, steps, monitors=None)` {#TensorFlowRegressor.train} + +Trains a model given an input builder function. + +##### Args: + + +* `input_fn`: Input builder function; returns a tuple of dicts or + a dict and a Tensor. +* `steps`: number of steps to train the model for. +* `monitors`: List of `BaseMonitor` subclass instances. Used for callbacks + inside the training loop. + +##### Returns: + + Returns self. + + +- - - + +#### `tf.contrib.learn.TensorFlowRegressor.weights_` {#TensorFlowRegressor.weights_} + +Returns the weights of the linear regression. 
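As a rough illustration of what the `steps` and `learning_rate` parameters control during fitting, here is a hypothetical pure-Python gradient-descent fit of a one-feature linear model. This is a sketch only; it is unrelated to the actual TensorFlow graph the estimator builds, and `fit_linear` is an invented name.

```python
# Hypothetical sketch of `steps` and `learning_rate`: each step applies
# one gradient update to the weight and bias of y = w*x + b under a
# mean-squared-error loss.
def fit_linear(xs, ys, steps=200, learning_rate=0.1):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error w.r.t. w and b.
        dw = sum(2.0 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2.0 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * dw
        b -= learning_rate * db
    return w, b

# Data generated by y = 2x + 1, so the fit should approach w=2, b=1.
w, b = fit_linear([0, 1, 2, 3], [1, 3, 5, 7])
```

More `steps` tightens the fit; too large a `learning_rate` makes the updates diverge, which is the same trade-off the estimator's parameters expose.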
+ + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_dask_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_dask_data.md deleted file mode 100644 index a14a51ff56..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_dask_data.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data} - -Extract data from dask.Series or dask.DataFrame for predictors - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_pandas_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_pandas_labels.md new file mode 100644 index 0000000000..521a8560e5 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.extract_pandas_labels.md @@ -0,0 +1,4 @@ +### `tf.contrib.learn.extract_pandas_labels(labels)` {#extract_pandas_labels} + +Extract data from pandas.DataFrame for labels + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_difference.md deleted file mode 100644 index 452115a428..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_difference.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)` {#set_difference} - -Compute set difference of elements in last dimension of `a` and `b`. - -All but the last dimension of `a` and `b` must match. - -##### Args: - - -* `a`: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices - must be sorted in row-major order. -* `b`: `Tensor` or `SparseTensor` of the same type as `a`. Must be - `SparseTensor` if `a` is `SparseTensor`. 
If sparse, indices must be - sorted in row-major order. -* `aminusb`: Whether to subtract `b` from `a`, vs vice versa. -* `validate_indices`: Whether to validate the order and range of sparse indices - in `a` and `b`. - -##### Returns: - - A `SparseTensor` with the same rank as `a` and `b`, and all but the last - dimension the same. Elements along the last dimension contain the - differences. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_size.md deleted file mode 100644 index 8f58261e7d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.set_size.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.contrib.metrics.set_size(a, validate_indices=True)` {#set_size} - -Compute the number of unique elements along the last dimension of `a`. - -##### Args: - - -* `a`: `SparseTensor`, with indices sorted in row-major order. -* `validate_indices`: Whether to validate the order and range of sparse indices - in `a`. - -##### Returns: - - For `a` ranked `n`, this is a `Tensor` with rank `n-1`, and the same 1st - `n-1` dimensions as `a`. Each value is the number of unique elements in - the corresponding `[0...n-1]` dimension of `a`. - -##### Raises: - - -* `TypeError`: If `a` is of an invalid type. 
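For a dense rank-2 input, the `set_size` semantics described above can be mimicked in plain Python. This is an illustrative analogue, not the TensorFlow op (which operates on a `SparseTensor`): count the unique elements in each row, i.e. along the last dimension.

```python
# Illustrative pure-Python analogue of set_size for a rank-2 dense input:
# the result has rank n-1, one unique-element count per row.
def set_size(a):
    return [len(set(row)) for row in a]

print(set_size([[1, 9, 9], [3, 3, 3]]))  # [2, 1]
```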
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_mean_absolute_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_mean_absolute_error.md deleted file mode 100644 index b4ecd6e916..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_mean_absolute_error.md +++ /dev/null @@ -1,48 +0,0 @@ -### `tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_absolute_error} - -Computes the mean absolute error between the labels and predictions. - -The `streaming_mean_absolute_error` function creates two local variables, -`total` and `count` that are used to compute the mean absolute error. This -average is ultimately returned as `mean_absolute_error`: an idempotent -operation that simply divides `total` by `count`. To facilitate the estimation -of the mean absolute error over a stream of data, the function utilizes two -operations. First, an `absolute_errors` operation computes the absolute value -of the differences between `predictions` and `labels`. Second, an `update_op` -operation whose behavior is dependent on the value of `weights`. If `weights` -is None, then `update_op` increments `total` with the reduced sum of -`absolute_errors` and increments `count` with the number of elements in -`absolute_errors`. If `weights` is not `None`, then `update_op` increments -`total` with the reduced sum of the product of `weights` and `absolute_errors` -and increments `count` with the reduced sum of `weights`. In addition to -performing the updates, `update_op` also returns the `mean_absolute_error` -value. - -##### Args: - - -* `predictions`: A `Tensor` of arbitrary shape. -* `labels`: A `Tensor` of the same shape as `predictions`. -* `weights`: An optional set of weights of the same shape as `predictions`. 
If - `weights` is not None, the function computes a weighted mean. -* `metrics_collections`: An optional list of collections that - `mean_absolute_error` should be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `mean_absolute_error`: A tensor representing the current mean, the value of - `total` divided by `count`. -* `update_op`: An operation that increments the `total` and `count` variables - appropriately and whose value matches `mean_absolute_error`. - -##### Raises: - - -* `ValueError`: If `weights` is not `None` and its shape doesn't match - `predictions` or if either `metrics_collections` or `updates_collections` - are not a list or tuple. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_recall.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_recall.md deleted file mode 100644 index 26308e2f5f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_recall.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.contrib.metrics.streaming_recall(predictions, labels, ignore_mask=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall} - -Computes the recall of the predictions with respect to the labels. - -The `streaming_recall` function creates two local variables, -`true_positives` and `false_negatives`, that are used to compute the -recall. This value is ultimately returned as `recall`, an idempotent -operation that simply divides `true_positives` by the sum of `true_positives` -and `false_negatives`. To facilitate the calculation of the recall over a -stream of data, the function creates an `update_op` operation whose behavior -is dependent on the value of `ignore_mask`. 
If `ignore_mask` is None, then -`update_op` increments `true_positives` with the number of elements of -`predictions` and `labels` that are both `True` and increments -`false_negatives` with the number of elements of `predictions` that are -`False` whose corresponding `labels` element is `True`. If `ignore_mask` is -not `None`, then the increments for `true_positives` and `false_negatives` are -only computed using elements of `predictions` and `labels` whose corresponding -values in `ignore_mask` are `False`. In addition to performing the updates, -`update_op` also returns the value of `recall`. - -##### Args: - - -* `predictions`: The predicted values, a binary `Tensor` of arbitrary shape. -* `labels`: The ground truth values, a binary `Tensor` whose dimensions must - match `predictions`. -* `ignore_mask`: An optional, binary tensor whose size matches `predictions`. -* `metrics_collections`: An optional list of collections that `recall` should - be added to. -* `updates_collections`: An optional list of collections that `update_op` should - be added to. -* `name`: An optional variable_op_scope name. - -##### Returns: - - -* `recall`: Scalar float `Tensor` with the value of `true_positives` divided - by the sum of `true_positives` and `false_negatives`. -* `update_op`: `Operation` that increments `true_positives` and - `false_negatives` variables appropriately and whose value matches - `recall`. - -##### Raises: - - -* `ValueError`: If the dimensions of `predictions` and `labels` don't match or - if `ignore_mask` is not `None` and its shape doesn't match `predictions` - or if either `metrics_collections` or `updates_collections` are not a list - or tuple. 
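The `true_positives`/`false_negatives` bookkeeping behind a streaming recall metric can be sketched in plain Python. This is a hypothetical sketch of the streaming-update pattern only, not the TensorFlow implementation; it ignores `ignore_mask`, and the class name is invented. Each `update` folds a batch into the two accumulators, and the metric is recomputed idempotently from them.

```python
# Hypothetical sketch of streaming recall bookkeeping: accumulate
# true positives and false negatives across batches; recall is always
# tp / (tp + fn) over everything seen so far.
class StreamingRecall:
    def __init__(self):
        self.true_positives = 0
        self.false_negatives = 0

    def update(self, predictions, labels):
        for p, l in zip(predictions, labels):
            if l and p:
                self.true_positives += 1       # predicted True, label True
            elif l and not p:
                self.false_negatives += 1      # predicted False, label True
        return self.recall()

    def recall(self):
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else 0.0

m = StreamingRecall()
m.update([True, False], [True, True])  # tp=1, fn=1 -> recall 0.5
m.update([True], [True])               # tp=2, fn=1 -> recall 2/3
```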
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_root_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_root_mean_squared_error.md new file mode 100644 index 0000000000..85319f44dd --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_root_mean_squared_error.md @@ -0,0 +1,48 @@ +### `tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_root_mean_squared_error} + +Computes the root mean squared error between the labels and predictions. + +The `streaming_root_mean_squared_error` function creates two local variables, +`total` and `count` that are used to compute the root mean squared error. +This average is ultimately returned as `root_mean_squared_error`: an +idempotent operation that takes the square root of the division of `total` +by `count`. To facilitate the estimation of the root mean squared error over a +stream of data, the function utilizes two operations. First, a `squared_error` +operation computes the element-wise square of the difference between +`predictions` and `labels`. Second, an `update_op` operation whose behavior is +dependent on the value of `weights`. If `weights` is None, then `update_op` +increments `total` with the reduced sum of `squared_error` and increments +`count` with the number of elements in `squared_error`. If `weights` is not +`None`, then `update_op` increments `total` with the reduced sum of the +product of `weights` and `squared_error` and increments `count` with the +reduced sum of `weights`. In addition to performing the updates, `update_op` +also returns the `root_mean_squared_error` value. + +##### Args: + + +* `predictions`: A `Tensor` of arbitrary shape. +* `labels`: A `Tensor` of the same shape as `predictions`. 
+* `weights`: An optional set of weights of the same shape as `predictions`. If + `weights` is not None, the function computes a weighted mean. +* `metrics_collections`: An optional list of collections that + `root_mean_squared_error` should be added to. +* `updates_collections`: An optional list of collections that `update_op` should + be added to. +* `name`: An optional variable_op_scope name. + +##### Returns: + + +* `root_mean_squared_error`: A tensor representing the current mean, the value + of `total` divided by `count`. +* `update_op`: An operation that increments the `total` and `count` variables + appropriately and whose value matches `root_mean_squared_error`. + +##### Raises: + + +* `ValueError`: If `weights` is not `None` and its shape doesn't match + `predictions` or if either `metrics_collections` or `updates_collections` + are not a list or tuple. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cos.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cos.md deleted file mode 100644 index b4f6f89933..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.cos.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.cos(x, name=None)` {#cos} - -Computes cos of x element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. 
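The `total`/`count` accumulator pattern used by streaming metrics such as `streaming_root_mean_squared_error` above can be sketched in plain Python. This is an illustrative sketch, not the TensorFlow implementation, and it omits `weights`; the class name is invented.

```python
import math

# Illustrative sketch of the total/count pattern: `update` folds a batch
# of squared errors into the two running accumulators, and the metric is
# recomputed idempotently as sqrt(total / count).
class StreamingRMSE:
    def __init__(self):
        self.total = 0.0   # running sum of squared errors
        self.count = 0     # running number of elements

    def update(self, predictions, labels):
        for p, l in zip(predictions, labels):
            self.total += (p - l) ** 2
            self.count += 1
        return self.result()

    def result(self):
        return math.sqrt(self.total / self.count) if self.count else 0.0

m = StreamingRMSE()
m.update([1.0, 2.0], [1.0, 4.0])  # squared errors 0, 4 -> sqrt(4/2)
m.update([3.0], [3.0])            # squared errors 0, 4, 0 -> sqrt(4/3)
```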
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.decode_raw.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.decode_raw.md new file mode 100644 index 0000000000..125c15d9a8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.decode_raw.md @@ -0,0 +1,23 @@ +### `tf.decode_raw(bytes, out_type, little_endian=None, name=None)` {#decode_raw} + +Reinterpret the bytes of a string as a vector of numbers. + +##### Args: + + +* `bytes`: A `Tensor` of type `string`. + All the elements must have the same length. +* `out_type`: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`. +* `little_endian`: An optional `bool`. Defaults to `True`. + Whether the input `bytes` are in little-endian order. + Ignored for `out_type` values that are stored in a single byte like + `uint8`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `out_type`. + A Tensor with one more dimension than the input `bytes`. The + added dimension will have size equal to the length of the elements + of `bytes` divided by the number of bytes to represent `out_type`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.dynamic_stitch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.dynamic_stitch.md new file mode 100644 index 0000000000..6bb1f8dd10 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.dynamic_stitch.md @@ -0,0 +1,53 @@ +### `tf.dynamic_stitch(indices, data, name=None)` {#dynamic_stitch} + +Interleave the values from the `data` tensors into a single tensor. + +Builds a merged tensor such that + + merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...] + +For example, if each `indices[m]` is scalar or vector, we have + + # Scalar indices + merged[indices[m], ...] = data[m][...] + + # Vector indices + merged[indices[m][i], ...] = data[m][i, ...] 
+ +Each `data[i].shape` must start with the corresponding `indices[i].shape`, +and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we +must have `data[i].shape = indices[i].shape + constant`. In terms of this +`constant`, the output shape is + + merged.shape = [max(indices)] + constant + +Values are merged in order, so if an index appears in both `indices[m][i]` and +`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the +merged result. + +For example: + + indices[0] = 6 + indices[1] = [4, 1] + indices[2] = [[5, 2], [0, 3]] + data[0] = [61, 62] + data[1] = [[41, 42], [11, 12]] + data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] + merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], + [51, 52], [61, 62]] + +
+ +
+ +##### Args: + + +* `indices`: A list of at least 2 `Tensor` objects of type `int32`. +* `data`: A list with the same number of `Tensor` objects as `indices` of `Tensor` objects of the same type. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.edit_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.edit_distance.md new file mode 100644 index 0000000000..e5f6471817 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.edit_distance.md @@ -0,0 +1,65 @@ +### `tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')` {#edit_distance} + +Computes the Levenshtein distance between sequences. + +This operation takes variable-length sequences (`hypothesis` and `truth`), +each provided as a `SparseTensor`, and computes the Levenshtein distance. +You can normalize the edit distance by the length of `truth` by setting +`normalize` to true. + +For example, given the following input: + +```python +# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values: +# (0,0) = ["a"] +# (1,0) = ["b"] +hypothesis = tf.SparseTensor( + [[0, 0, 0], + [1, 0, 0]], + ["a", "b"], + (2, 1, 1)) + +# 'truth' is a tensor of shape `[2, 2]` with variable-length values: +# (0,0) = [] +# (0,1) = ["a"] +# (1,0) = ["b", "c"] +# (1,1) = ["a"] +truth = tf.SparseTensor( + [[0, 1, 0], + [1, 0, 0], + [1, 0, 1], + [1, 1, 0]], + ["a", "b", "c", "a"], + (2, 2, 2)) + +normalize = True +``` + +This operation would return the following: + +```python +# 'output' is a tensor of shape `[2, 2]` with edit distances normalized +# by 'truth' lengths. +output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis + [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis +``` + +##### Args: + + +* `hypothesis`: A `SparseTensor` containing hypothesis sequences. 
+* `truth`: A `SparseTensor` containing truth sequences. +* `normalize`: A `bool`. If `True`, normalizes the Levenshtein distance by + the length of `truth`. +* `name`: A name for the operation (optional). + +##### Returns: + + A dense `Tensor` with rank `R - 1`, where R is the rank of the + `SparseTensor` inputs `hypothesis` and `truth`. + +##### Raises: + + +* `TypeError`: If either `hypothesis` or `truth` is not a `SparseTensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.erfc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.erfc.md deleted file mode 100644 index d2ac7952e0..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.erfc.md +++ /dev/null @@ -1,14 +0,0 @@ -### `tf.erfc(x, name=None)` {#erfc} - -Computes the complementary error function of `x` element-wise. - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md deleted file mode 100644 index 877325fe0b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md +++ /dev/null @@ -1,17 +0,0 @@ -Raised when an operation receives an invalid argument. - -This may occur, for example, if an operation receives an input -tensor that has an invalid value or shape. 
For example, the -[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul) op will raise this -error if it receives an input that is not a matrix, and the -[`tf.reshape()`](../../api_docs/python/array_ops.md#reshape) op will raise -this error if the new shape does not match the number of elements in the input -tensor. - -- - - - -#### `tf.errors.InvalidArgumentError.__init__(node_def, op, message)` {#InvalidArgumentError.__init__} - -Creates an `InvalidArgumentError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnimplementedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnimplementedError.md new file mode 100644 index 0000000000..945daa1a22 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnimplementedError.md @@ -0,0 +1,15 @@ +Raised when an operation has not been implemented. + +Some operations may raise this error when passed otherwise-valid +arguments that it does not currently support. For example, running +the [`tf.nn.max_pool()`](../../api_docs/python/nn.md#max_pool) operation +would raise this error if pooling was requested on the batch dimension, +because this is not yet supported. + +- - - + +#### `tf.errors.UnimplementedError.__init__(node_def, op, message)` {#UnimplementedError.__init__} + +Creates an `UnimplementedError`. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md deleted file mode 100644 index 3e18ec866b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md +++ /dev/null @@ -1,15 +0,0 @@ -Unknown error. - -An example of where this error may be returned is if a Status value -received from another address space belongs to an error-space that -is not known to this address space. 
Also errors raised by APIs that -do not return enough error information may be converted to this -error. - -- - - - -#### `tf.errors.UnknownError.__init__(node_def, op, message, error_code=2)` {#UnknownError.__init__} - -Creates an `UnknownError`. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fft2d.md new file mode 100644 index 0000000000..e480dcb27e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fft2d.md @@ -0,0 +1,14 @@ +### `tf.fft2d(input, name=None)` {#fft2d} + +Compute the 2-dimensional discrete Fourier Transform. + +##### Args: + + +* `input`: A `Tensor` of type `complex64`. A complex64 matrix. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `complex64`. The 2D Fourier Transform of `input`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md deleted file mode 100644 index 0a75190c04..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md +++ /dev/null @@ -1,44 +0,0 @@ -### `tf.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldr} - -foldr on the list of tensors unpacked from `elems` on dimension 0. - -This foldr operator repeatedly applies the callable `fn` to a sequence -of elements from last to first. The elements are made of the tensors -unpacked from `elems`. The callable fn takes two tensors as arguments. -The first argument is the accumulated value computed from the preceding -invocation of fn. If `initializer` is None, `elems` must contain at least -one element, and its first element is used as the initializer. - -Suppose that `elems` is unpacked into `values`, a list of tensors. The shape -of the result tensor is `fn(initializer, values[0]).shape`. 
- -##### Args: - - -* `fn`: The callable to be performed. -* `elems`: A tensor that is unpacked into a sequence of tensors to apply `fn`. -* `initializer`: (optional) The initial value for the accumulator. -* `parallel_iterations`: (optional) The number of iterations allowed to run - in parallel. -* `back_prop`: (optional) True enables back propagation. -* `swap_memory`: (optional) True enables GPU-CPU memory swapping. -* `name`: (optional) Name prefix for the returned tensors. - -##### Returns: - - A tensor resulting from applying `fn` consecutively to the list of tensors - unpacked from `elems`, from last to first. - -##### Raises: - - -* `TypeError`: if `fn` is not callable. - -##### Example: - - ```python - elems = [1, 2, 3, 4, 5, 6] - sum = foldr(lambda a, x: a + x, elems) - # sum == 21 - ``` - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md new file mode 100644 index 0000000000..59e1a1797a --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md @@ -0,0 +1,72 @@ +### `tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True)` {#get_variable} + +Gets an existing variable with these parameters or creates a new one. + +This function prefixes the name with the current variable scope +and performs reuse checks. See the +[Variable Scope How To](../../how_tos/variable_scope/index.md) +for an extensive description of how reusing works. Here is a basic example: + +```python +with tf.variable_scope("foo"): + v = tf.get_variable("v", [1]) # v.name == "foo/v:0" + w = tf.get_variable("w", [1]) # w.name == "foo/w:0" +with tf.variable_scope("foo", reuse=True): + v1 = tf.get_variable("v") # The same as v above.
+``` + +If initializer is `None` (the default), the default initializer passed in +the variable scope will be used. If that one is `None` too, a +`UniformUnitScalingInitializer` will be used. The initializer can also be +a Tensor, in which case the variable is initialized to this value and shape. + +Similarly, if the regularizer is `None` (the default), the default regularizer +passed in the variable scope will be used (if that is `None` too, +then by default no regularization is performed). + +If a partitioner is provided, first a sharded `Variable` is created +via `_get_partitioned_variable_list`, and the return value is a +`Tensor` composed of the shards concatenated along the partition axis. + +Some useful partitioners are available. See, e.g., +`variable_axis_size_partitioner`. + +##### Args: + + +* `name`: The name of the new or existing variable. +* `shape`: Shape of the new or existing variable. +* `dtype`: Type of the new or existing variable (defaults to `DT_FLOAT`). +* `initializer`: Initializer for the variable if one is created. +* `regularizer`: A (Tensor -> Tensor or None) function; the result of + applying it on a newly created variable will be added to the collection + GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. +* `trainable`: If `True` also add the variable to the graph collection + `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). +* `collections`: List of graph collections keys to add the Variable to. + Defaults to `[GraphKeys.VARIABLES]` (see tf.Variable). + If partitioning is enabled and used, the concatenated return value + is also added to collection `GraphKeys.CONCATENATED_VARIABLES`. +* `caching_device`: Optional device string or function describing where the + Variable should be cached for reading. Defaults to the Variable's + device. If not `None`, caches on another device. 
Typical use is to + cache on the device where the Ops using the Variable reside, to + deduplicate copying through `Switch` and other conditional statements. +* `partitioner`: Optional callable that accepts a fully defined `TensorShape` + and `dtype` of the Variable to be created, and returns a list of + partitions for each axis (currently only one axis can be partitioned). +* `validate_shape`: If False, allows the variable to be initialized with a + value of unknown shape. If True, the default, the shape of initial_value + must be known. + +##### Returns: + + The created or existing variable. + +##### Raises: + + +* `ValueError`: when creating a new variable and shape is not declared, + or when violating reuse during variable creation. Reuse is set inside + `variable_scope`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.igammac.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.igammac.md deleted file mode 100644 index 1b739bcfca..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.igammac.md +++ /dev/null @@ -1,29 +0,0 @@ -### `tf.igammac(a, x, name=None)` {#igammac} - -Compute the upper regularized incomplete Gamma function `Q(a, x)`. - -The upper regularized incomplete Gamma function is defined as: - -``` -Q(a, x) = Gamma(a, x) / Gamma(x) = 1 - P(a, x) -``` -where -``` -Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt -``` -is the upper incomplete Gama function. - -Note, above `P(a, x)` (`Igamma`) is the lower regularized complete -Gamma function. - -##### Args: - - -* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `x`: A `Tensor`. Must have the same type as `a`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `a`. 
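As a numerical cross-check of the definition above, here is a minimal stdlib-only Python sketch (the helper names `upper_incomplete_gamma` and `igammac` are ours, not TensorFlow's) that approximates `Gamma(a, x)` by trapezoid integration over a truncated tail and normalizes by the complete `Gamma(a)`:

```python
import math

def upper_incomplete_gamma(a, x, steps=20000, cutoff=60.0):
    # Numerically approximate Gamma(a, x) = int_x^inf t^(a-1) e^(-t) dt
    # with a plain trapezoid rule; `cutoff` truncates the infinite tail,
    # which is safe here because the integrand decays like e^(-t).
    h = (cutoff - x) / steps
    f = lambda t: t ** (a - 1.0) * math.exp(-t)
    total = 0.5 * (f(x) + f(cutoff))
    for i in range(1, steps):
        total += f(x + i * h)
    return total * h

def igammac(a, x):
    # Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)
    return upper_incomplete_gamma(a, x) / math.gamma(a)
```

For `a = 1` this reduces to `Q(1, x) = exp(-x)`, which makes a convenient spot-check of the numerics.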
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_contrast.md deleted file mode 100644 index 76cd2292cf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_contrast.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.image.random_contrast(image, lower, upper, seed=None)` {#random_contrast} - -Adjust the contrast of an image by a random factor. - -Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly -picked in the interval `[lower, upper]`. - -##### Args: - - -* `image`: An image tensor with 3 or more dimensions. -* `lower`: float. Lower bound for the random contrast factor. -* `upper`: float. Upper bound for the random contrast factor. -* `seed`: A Python integer. Used to create a random seed. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. - -##### Returns: - - The contrast-adjusted tensor. - -##### Raises: - - -* `ValueError`: if `upper <= lower` or if `lower < 0`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_images.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_images.md new file mode 100644 index 0000000000..d010cac831 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_images.md @@ -0,0 +1,43 @@ +### `tf.image.resize_images(images, new_height, new_width, method=0, align_corners=False)` {#resize_images} + +Resize `images` to `new_width`, `new_height` using the specified `method`. + +Resized images will be distorted if their original aspect ratio is not +the same as `new_width`, `new_height`. To avoid distortions see +[`resize_image_with_crop_or_pad`](#resize_image_with_crop_or_pad). + +`method` can be one of: + +* `ResizeMethod.BILINEAR`: [Bilinear interpolation.] 
+ (https://en.wikipedia.org/wiki/Bilinear_interpolation) +* `ResizeMethod.NEAREST_NEIGHBOR`: [Nearest neighbor interpolation.] + (https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation) +* `ResizeMethod.BICUBIC`: [Bicubic interpolation.] + (https://en.wikipedia.org/wiki/Bicubic_interpolation) +* `ResizeMethod.AREA`: Area interpolation. + +##### Args: + + +* `images`: 4-D Tensor of shape `[batch, height, width, channels]` or + 3-D Tensor of shape `[height, width, channels]`. +* `new_height`: integer. +* `new_width`: integer. +* `method`: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`. +* `align_corners`: bool. If true, exactly align all 4 corners of the input and + output. Defaults to `false`. + +##### Raises: + + +* `ValueError`: if the shape of `images` is incompatible with the + shape arguments to this function +* `ValueError`: if an unsupported resize method is specified. + +##### Returns: + + If `images` was 4-D, a 4-D float Tensor of shape + `[batch, new_height, new_width, channels]`. + If `images` was 3-D, a 3-D float Tensor of shape + `[new_height, new_width, channels]`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_nearest_neighbor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_nearest_neighbor.md deleted file mode 100644 index ba72e73ebd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.resize_nearest_neighbor.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.image.resize_nearest_neighbor(images, size, align_corners=None, name=None)` {#resize_nearest_neighbor} - -Resize `images` to `size` using nearest neighbor interpolation. - -##### Args: - - -* `images`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`. - 4-D with shape `[batch, height, width, channels]`. -* `size`: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The - new size for the images. 
-* `align_corners`: An optional `bool`. Defaults to `False`. - If true, rescale input by (new_height - 1) / (height - 1), which - exactly aligns the 4 corners of images and resized images. If false, rescale - by new_height / height. Treat similarly the width dimension. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `images`. 4-D with shape - `[batch, new_height, new_width, channels]`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.rgb_to_hsv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.rgb_to_hsv.md deleted file mode 100644 index 7c5d05f515..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.rgb_to_hsv.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.image.rgb_to_hsv(images, name=None)` {#rgb_to_hsv} - -Converts one or more images from RGB to HSV. - -Outputs a tensor of the same shape as the `images` tensor, containing the HSV -value of the pixels. The output is only well defined if the value in `images` -are in `[0,1]`. - -`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and -`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 -corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. - -##### Args: - - -* `images`: A `Tensor` of type `float32`. - 1-D or higher rank. RGB data to convert. Last dimension must be size 3. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `float32`. `images` converted to HSV. 
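For intuition, the same per-pixel conversion can be sketched with the standard library's `colorsys` module, which uses the identical `[0, 1]` conventions for hue, saturation, and value (the helper name is ours, and it walks nested lists rather than tensors):

```python
import colorsys

def rgb_image_to_hsv(image):
    # image: nested lists whose last dimension is an [r, g, b] triple of
    # floats in [0, 1], mirroring the "last dimension must be size 3"
    # contract above. Pixels are converted; outer dimensions are recursed.
    if len(image) == 3 and all(isinstance(c, float) for c in image):
        return list(colorsys.rgb_to_hsv(*image))
    return [rgb_image_to_hsv(sub) for sub in image]
```

Pure red maps to hue 0 and pure green to hue 1/3, matching the description above.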
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.sample_distorted_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.sample_distorted_bounding_box.md deleted file mode 100644 index 2831492f54..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.sample_distorted_bounding_box.md +++ /dev/null @@ -1,85 +0,0 @@ -### `tf.image.sample_distorted_bounding_box(image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=None, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None)` {#sample_distorted_bounding_box} - -Generate a single randomly distorted bounding box for an image. - -Bounding box annotations are often supplied in addition to ground-truth labels -in image recognition or object localization tasks. A common technique for -training such a system is to randomly distort an image while preserving -its content, i.e. *data augmentation*. This Op outputs a randomly distorted -localization of an object, i.e. bounding box, given an `image_size`, -`bounding_boxes` and a series of constraints. - -The output of this Op is a single bounding box that may be used to crop the -original image. The output is returned as 3 tensors: `begin`, `size` and -`bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the -image. The latter may be supplied to `tf.image.draw_bounding_box` to visualize -what the bounding box looks like. - -Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The -bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and -height of the underlying image. - -For example, - - # Generate a single distorted bounding box. - begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( - tf.shape(image), - bounding_boxes=bounding_boxes) - - # Draw the bounding box in an image summary. 
- image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), - bbox_for_draw) - tf.image_summary('images_with_box', image_with_box) - - # Employ the bounding box to distort the image. - distorted_image = tf.slice(image, begin, size) - -Note that if no bounding box information is available, setting -`use_image_if_no_bounding_boxes = true` will assume there is a single implicit -bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is -false and no bounding boxes are supplied, an error is raised. - -##### Args: - - -* `image_size`: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`. - 1-D, containing `[height, width, channels]`. -* `bounding_boxes`: A `Tensor` of type `float32`. - 3-D with shape `[batch, N, 4]` describing the N bounding boxes - associated with the image. -* `seed`: An optional `int`. Defaults to `0`. - If either `seed` or `seed2` are set to non-zero, the random number - generator is seeded by the given `seed`. Otherwise, it is seeded by a random - seed. -* `seed2`: An optional `int`. Defaults to `0`. - A second seed to avoid seed collision. -* `min_object_covered`: An optional `float`. Defaults to `0.1`. - The cropped area of the image must contain at least this - fraction of any bounding box supplied. -* `aspect_ratio_range`: An optional list of `floats`. Defaults to `[0.75, 1.33]`. - The cropped area of the image must have an aspect ratio = - width / height within this range. -* `area_range`: An optional list of `floats`. Defaults to `[0.05, 1]`. - The cropped area of the image must contain a fraction of the - supplied image within in this range. -* `max_attempts`: An optional `int`. Defaults to `100`. - Number of attempts at generating a cropped region of the image - of the specified constraints. After `max_attempts` failures, return the entire - image. -* `use_image_if_no_bounding_boxes`: An optional `bool`. Defaults to `False`. 
- Controls behavior if no bounding boxes supplied. - If true, assume an implicit bounding box covering the whole input. If false, - raise an error. -* `name`: A name for the operation (optional). - -##### Returns: - - A tuple of `Tensor` objects (begin, size, bboxes). - -* `begin`: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to - `tf.slice`. -* `size`: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to - `tf.slice`. -* `bboxes`: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box. - Provide as input to `tf.image.draw_bounding_boxes`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.lgamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.lgamma.md new file mode 100644 index 0000000000..2b8fda7dee --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.lgamma.md @@ -0,0 +1,14 @@ +### `tf.lgamma(x, name=None)` {#lgamma} + +Computes the log of the absolute value of `Gamma(x)` element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matching_files.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matching_files.md new file mode 100644 index 0000000000..297462d580 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matching_files.md @@ -0,0 +1,17 @@ +### `tf.matching_files(pattern, name=None)` {#matching_files} + +Returns the set of files matching a pattern. 
+ +Note that this routine only supports wildcard characters in the +basename portion of the pattern, not in the directory portion. + +##### Args: + + +* `pattern`: A `Tensor` of type `string`. A (scalar) shell wildcard pattern. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `string`. A vector of matching filenames. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.merge_all_summaries.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.merge_all_summaries.md deleted file mode 100644 index 40143de15d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.merge_all_summaries.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.merge_all_summaries(key='summaries')` {#merge_all_summaries} - -Merges all summaries collected in the default graph. - -##### Args: - - -* `key`: `GraphKey` used to collect the summaries. Defaults to - `GraphKeys.SUMMARIES`. - -##### Returns: - - If no summaries were collected, returns None. Otherwise returns a scalar - `Tensor` of type `string` containing the serialized `Summary` protocol - buffer resulting from the merging. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.moving_average_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.moving_average_variables.md deleted file mode 100644 index 467a666e2c..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.moving_average_variables.md +++ /dev/null @@ -1,13 +0,0 @@ -### `tf.moving_average_variables()` {#moving_average_variables} - -Returns all variables that maintain their moving averages. - -If an `ExponentialMovingAverage` object is created and the `apply()` -method is called on a list of variables, these variables will -be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. -This convenience function returns the contents of that collection. 
- -##### Returns: - - A list of Variable objects. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.name_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.name_scope.md deleted file mode 100644 index a003f2327f..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.name_scope.md +++ /dev/null @@ -1,18 +0,0 @@ -### `tf.name_scope(name)` {#name_scope} - -Wrapper for `Graph.name_scope()` using the default graph. - -See -[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope) -for more details. - -##### Args: - - -* `name`: A name for the scope. - -##### Returns: - - A context manager that installs `name` as a new name scope in the - default graph. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool.md deleted file mode 100644 index f17efa01de..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool.md +++ /dev/null @@ -1,21 +0,0 @@ -### `tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#max_pool} - -Performs the max pooling on the input. - -##### Args: - - -* `value`: A 4-D `Tensor` with shape `[batch, height, width, channels]` and - type `tf.float32`. -* `ksize`: A list of ints that has length >= 4. The size of the window for - each dimension of the input tensor. -* `strides`: A list of ints that has length >= 4. The stride of the sliding - window for each dimension of the input tensor. -* `padding`: A string, either `'VALID'` or `'SAME'`. The padding algorithm. -* `data_format`: A string. 'NHWC' and 'NCHW' are supported. -* `name`: Optional name for the operation. - -##### Returns: - - A `Tensor` with type `tf.float32`. The max pooled output tensor. 
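The pooling semantics can be illustrated without TensorFlow. Below is a minimal single-channel sketch with `'VALID'`-style padding, using a square window and one stride for both spatial dimensions (the helper name and the simplified signature are ours):

```python
def max_pool_2d(grid, ksize, stride):
    # Slide a ksize x ksize window over `grid` (a list of rows) and keep
    # the maximum in each window; windows that would run off the edge are
    # dropped, as with 'VALID' padding.
    h, w = len(grid), len(grid[0])
    out = []
    for top in range(0, h - ksize + 1, stride):
        row = []
        for left in range(0, w - ksize + 1, stride):
            row.append(max(grid[top + i][left + j]
                           for i in range(ksize)
                           for j in range(ksize)))
        out.append(row)
    return out
```

A 2x2 window with stride 2 over a 4x4 grid yields a 2x2 output, one maximum per non-overlapping block.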
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.nce_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.nce_loss.md new file mode 100644 index 0000000000..2fc7ab6b65 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.nce_loss.md @@ -0,0 +1,53 @@ +### `tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy='mod', name='nce_loss')` {#nce_loss} + +Computes and returns the noise-contrastive estimation training loss. + +See [Noise-contrastive estimation: A new estimation principle for +unnormalized statistical models] +(http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). +Also see our [Candidate Sampling Algorithms Reference] +(../../extras/candidate_sampling.pdf) + +Note: In the case where `num_true` > 1, we assign to each target class +the target probability 1 / `num_true` so that the target probabilities +sum to 1 per-example. + +Note: It would be useful to allow a variable number of target classes per +example. We hope to provide this functionality in a future release. +For now, if you have a variable number of target classes, you can pad them +out to a constant number by either repeating them or by padding +with an otherwise unused class. + +##### Args: + + +* `weights`: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` + objects whose concatenation along dimension 0 has shape + [num_classes, dim]. The (possibly-partitioned) class embeddings. +* `biases`: A `Tensor` of shape `[num_classes]`. The class biases. +* `inputs`: A `Tensor` of shape `[batch_size, dim]`. The forward + activations of the input network. +* `labels`: A `Tensor` of type `int64` and shape `[batch_size, + num_true]`. The target classes. +* `num_sampled`: An `int`. The number of classes to randomly sample per batch. +* `num_classes`: An `int`. 
The number of possible classes. +* `num_true`: An `int`. The number of target classes per training example. +* `sampled_values`: a tuple of (`sampled_candidates`, `true_expected_count`, + `sampled_expected_count`) returned by a `*_candidate_sampler` function. + (if None, we default to `log_uniform_candidate_sampler`) +* `remove_accidental_hits`: A `bool`. Whether to remove "accidental hits" + where a sampled class equals one of the target classes. If set to + `True`, this is a "Sampled Logistic" loss instead of NCE, and we are + learning to generate log-odds instead of log probabilities. See + our [Candidate Sampling Algorithms Reference] + (../../extras/candidate_sampling.pdf). + Default is False. +* `partition_strategy`: A string specifying the partitioning strategy, relevant + if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported. + Default is `"mod"`. See `tf.nn.embedding_lookup` for more details. +* `name`: A name for the operation (optional). + +##### Returns: + + A `batch_size` 1-D tensor of per-example NCE losses. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.relu6.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.relu6.md deleted file mode 100644 index 9695e557eb..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.relu6.md +++ /dev/null @@ -1,15 +0,0 @@ -### `tf.nn.relu6(features, name=None)` {#relu6} - -Computes Rectified Linear 6: `min(max(features, 0), 6)`. - -##### Args: - - -* `features`: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`, - `int16`, or `int8`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with the same type as `features`. 
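As a plain-Python sketch of the same element-wise rule (operating on a list rather than a `Tensor`; the helper name is ours):

```python
def relu6(features):
    # Element-wise min(max(x, 0), 6), matching the definition above:
    # negatives clamp to 0, values above 6 clamp to 6.
    return [min(max(x, 0.0), 6.0) for x in features]
```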
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.softplus.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.softplus.md new file mode 100644 index 0000000000..c0faef9687 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.softplus.md @@ -0,0 +1,14 @@ +### `tf.nn.softplus(features, name=None)` {#softplus} + +Computes softplus: `log(exp(features) + 1)`. + +##### Args: + + +* `features`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `features`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.no_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.no_op.md deleted file mode 100644 index c1b5c0824b..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.no_op.md +++ /dev/null @@ -1,13 +0,0 @@ -### `tf.no_op(name=None)` {#no_op} - -Does nothing. Only useful as a placeholder for control edges. - -##### Args: - - -* `name`: A name for the operation (optional). - -##### Returns: - - The created Operation. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.not_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.not_equal.md new file mode 100644 index 0000000000..9c18792223 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.not_equal.md @@ -0,0 +1,15 @@ +### `tf.not_equal(x, y, name=None)` {#not_equal} + +Returns the truth value of (x != y) element-wise. + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`. +* `y`: A `Tensor`. Must have the same type as `x`. 
+* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` of type `bool`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones_initializer.md deleted file mode 100644 index 0ddbc8b801..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones_initializer.md +++ /dev/null @@ -1,4 +0,0 @@ -### `tf.ones_initializer(shape, dtype=tf.float32)` {#ones_initializer} - -An adaptor for ones() to match the Initializer spec. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.polygamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.polygamma.md deleted file mode 100644 index c8b5b2578a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.polygamma.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.polygamma(a, x, name=None)` {#polygamma} - -Compute the polygamma function \\(\psi^{(n)}(x)\\). - -The polygamma function is defined as: - -``` -\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x) -``` -where \\(\psi(x)\\) is the digamma function. - -##### Args: - - -* `a`: A `Tensor`. Must be one of the following types: `float32`, `float64`. -* `x`: A `Tensor`. Must have the same type as `a`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `a`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.python_io.TFRecordWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.python_io.TFRecordWriter.md new file mode 100644 index 0000000000..4a67724209 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.python_io.TFRecordWriter.md @@ -0,0 +1,41 @@ +A class to write records to a TFRecords file. + +This class implements `__enter__` and `__exit__`, and can be used +in `with` blocks like a normal file. 
+ +- - - + +#### `tf.python_io.TFRecordWriter.__init__(path)` {#TFRecordWriter.__init__} + +Opens file `path` and creates a `TFRecordWriter` writing to it. + +##### Args: + + +* `path`: The path to the TFRecords file. + +##### Raises: + + +* `IOError`: If `path` cannot be opened for writing. + + +- - - + +#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write} + +Write a string record to the file. + +##### Args: + + +* `record`: str + + +- - - + +#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close} + +Close the file. + + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal.md deleted file mode 100644 index 1344423202..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal.md +++ /dev/null @@ -1,23 +0,0 @@ -### `tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#random_normal} - -Outputs random values from a normal distribution. - -##### Args: - - -* `shape`: A 1-D integer Tensor or Python array. The shape of the output tensor. -* `mean`: A 0-D Tensor or Python value of type `dtype`. The mean of the normal - distribution. -* `stddev`: A 0-D Tensor or Python value of type `dtype`. The standard deviation - of the normal distribution. -* `dtype`: The type of the output. -* `seed`: A Python integer. Used to create a random seed for the distribution. - See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `name`: A name for the operation (optional). - -##### Returns: - - A tensor of the specified shape filled with random normal values. 
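The shape-filling behavior can be mimicked with the standard library's `random.gauss`. This is a flat-list sketch under our own helper name, not the TensorFlow op; here a fixed `seed` makes the draws reproducible:

```python
import random

def random_normal(shape, mean=0.0, stddev=1.0, seed=None):
    # Draw one N(mean, stddev^2) sample per element of `shape`'s volume.
    # The real op returns a tensor of the given shape; this sketch returns
    # the samples as a flat list.
    rng = random.Random(seed)
    n = 1
    for dim in shape:
        n *= dim
    return [rng.gauss(mean, stddev) for _ in range(n)]
```

Calling it twice with the same seed reproduces the same values, which is the behavior the `seed` argument above is for.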
- diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal_initializer.md new file mode 100644 index 0000000000..9f229e3b1c --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_normal_initializer.md @@ -0,0 +1,25 @@ +### `tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#random_normal_initializer} + +Returns an initializer that generates tensors with a normal distribution. + +##### Args: + + +* `mean`: a python scalar or a scalar tensor. Mean of the random values + to generate. +* `stddev`: a python scalar or a scalar tensor. Standard deviation of the + random values to generate. +* `seed`: A Python integer. Used to create random seeds. See + [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) + for behavior. +* `dtype`: The data type. Only floating point types are supported. + +##### Returns: + + An initializer that generates tensors with a normal distribution. + +##### Raises: + + +* `ValueError`: if `dtype` is not a floating point type. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_any.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_any.md new file mode 100644 index 0000000000..58a911a8cf --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_any.md @@ -0,0 +1,35 @@ +### `tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_any} + +Computes the "logical or" of elements across dimensions of a tensor. + +Reduces `input_tensor` along the dimensions given in `reduction_indices`. +Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each +entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions +are retained with length 1. 
+ +If `reduction_indices` has no entries, all dimensions are reduced, and a +tensor with a single element is returned. + +For example: + +```python +# 'x' is [[True, True] +# [False, False]] +tf.reduce_any(x) ==> True +tf.reduce_any(x, 0) ==> [True, True] +tf.reduce_any(x, 1) ==> [True, False] +``` + +##### Args: + + +* `input_tensor`: The boolean tensor to reduce. +* `reduction_indices`: The dimensions to reduce. If `None` (the default), + reduces all dimensions. +* `keep_dims`: If true, retains reduced dimensions with length 1. +* `name`: A name for the operation (optional). + +##### Returns: + + The reduced tensor. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_sum.md deleted file mode 100644 index edbb1ab055..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reduce_sum.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)` {#reduce_sum} - -Computes the sum of elements across dimensions of a tensor. - -Reduces `input_tensor` along the dimensions given in `reduction_indices`. -Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each -entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions -are retained with length 1. - -If `reduction_indices` has no entries, all dimensions are reduced, and a -tensor with a single element is returned. - -For example: - -```python -# 'x' is [[1, 1, 1] -# [1, 1, 1]] -tf.reduce_sum(x) ==> 6 -tf.reduce_sum(x, 0) ==> [2, 2, 2] -tf.reduce_sum(x, 1) ==> [3, 3] -tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]] -tf.reduce_sum(x, [0, 1]) ==> 6 -``` - -##### Args: - - -* `input_tensor`: The tensor to reduce. Should have numeric type. -* `reduction_indices`: The dimensions to reduce. If `None` (the default), - reduces all dimensions. 
-* `keep_dims`: If true, retains reduced dimensions with length 1. -* `name`: A name for the operation (optional). - -##### Returns: - - The reduced tensor. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reset_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reset_default_graph.md new file mode 100644 index 0000000000..ae5a906a0d --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.reset_default_graph.md @@ -0,0 +1,10 @@ +### `tf.reset_default_graph()` {#reset_default_graph} + +Clears the default graph stack and resets the global default graph. + +NOTE: The default graph is a property of the current thread. This +function applies only to the current thread. Calling this function while +a `tf.Session` or `tf.InteractiveSession` is active will result in undefined +behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects +after calling this function will result in undefined behavior. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.rsqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.rsqrt.md new file mode 100644 index 0000000000..5e8b1bc917 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.rsqrt.md @@ -0,0 +1,16 @@ +### `tf.rsqrt(x, name=None)` {#rsqrt} + +Computes reciprocal of square root of x element-wise. + +I.e., \\(y = 1 / \sqrt{x}\\). + +##### Args: + + +* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `x`. 
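The formula \\(y = 1 / \sqrt{x}\\) above is easy to mirror element-wise in plain Python (a list stands in for a tensor in this sketch):

```python
import math


def rsqrt(xs):
    """Element-wise reciprocal square root: y = 1 / sqrt(x)."""
    return [1.0 / math.sqrt(x) for x in xs]


print(rsqrt([1.0, 4.0, 16.0]))  # [1.0, 0.5, 0.25]
```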
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md
new file mode 100644
index 0000000000..5af291597d
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md
@@ -0,0 +1,23 @@
+### `tf.scalar_mul(scalar, x)` {#scalar_mul}
+
+Multiplies a scalar times a `Tensor` or `IndexedSlices` object.
+
+Intended for use in gradient code which might deal with `IndexedSlices`
+objects, which are easy to multiply by a scalar but more expensive to
+multiply with arbitrary tensors.
+
+##### Args:
+
+
+* `scalar`: A 0-D scalar `Tensor`. Must have known shape.
+* `x`: A `Tensor` or `IndexedSlices` to be scaled.
+
+##### Returns:
+
+  `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.
+
+##### Raises:
+
+
+* `ValueError`: if `scalar` is not a 0-D `Tensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.set_random_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.set_random_seed.md
new file mode 100644
index 0000000000..af817dbafa
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.set_random_seed.md
@@ -0,0 +1,98 @@
+### `tf.set_random_seed(seed)` {#set_random_seed}
+
+Sets the graph-level random seed.
+
+Operations that rely on a random seed actually derive it from two seeds:
+the graph-level and operation-level seeds. This sets the graph-level seed.
+
+Its interactions with operation-level seeds are as follows:
+
+  1. If neither the graph-level nor the operation seed is set:
+    A random seed is used for this op.
+  2. If the graph-level seed is set, but the operation seed is not:
+    The system deterministically picks an operation seed in conjunction
+    with the graph-level seed so that it gets a unique random sequence.
+  3. 
If the graph-level seed is not set, but the operation seed is set: + A default graph-level seed and the specified operation seed are used to + determine the random sequence. + 4. If both the graph-level and the operation seed are set: + Both seeds are used in conjunction to determine the random sequence. + +To illustrate the user-visible effects, consider these examples: + +To generate different sequences across sessions, set neither +graph-level nor op-level seeds: + +```python +a = tf.random_uniform([1]) +b = tf.random_normal([1]) + +print("Session 1") +with tf.Session() as sess1: + print(sess1.run(a)) # generates 'A1' + print(sess1.run(a)) # generates 'A2' + print(sess1.run(b)) # generates 'B1' + print(sess1.run(b)) # generates 'B2' + +print("Session 2") +with tf.Session() as sess2: + print(sess2.run(a)) # generates 'A3' + print(sess2.run(a)) # generates 'A4' + print(sess2.run(b)) # generates 'B3' + print(sess2.run(b)) # generates 'B4' +``` + +To generate the same repeatable sequence for an op across sessions, set the +seed for the op: + +```python +a = tf.random_uniform([1], seed=1) +b = tf.random_normal([1]) + +# Repeatedly running this block with the same graph will generate the same +# sequence of values for 'a', but different sequences of values for 'b'. 
+print("Session 1")
+with tf.Session() as sess1:
+  print(sess1.run(a))  # generates 'A1'
+  print(sess1.run(a))  # generates 'A2'
+  print(sess1.run(b))  # generates 'B1'
+  print(sess1.run(b))  # generates 'B2'
+
+print("Session 2")
+with tf.Session() as sess2:
+  print(sess2.run(a))  # generates 'A1'
+  print(sess2.run(a))  # generates 'A2'
+  print(sess2.run(b))  # generates 'B3'
+  print(sess2.run(b))  # generates 'B4'
+```
+
+To make the random sequences generated by all ops be repeatable across
+sessions, set a graph-level seed:
+
+```python
+tf.set_random_seed(1234)
+a = tf.random_uniform([1])
+b = tf.random_normal([1])
+
+# Repeatedly running this block with the same graph will generate the same
+# sequences of 'a' and 'b'.
+print("Session 1")
+with tf.Session() as sess1:
+  print(sess1.run(a))  # generates 'A1'
+  print(sess1.run(a))  # generates 'A2'
+  print(sess1.run(b))  # generates 'B1'
+  print(sess1.run(b))  # generates 'B2'
+
+print("Session 2")
+with tf.Session() as sess2:
+  print(sess2.run(a))  # generates 'A1'
+  print(sess2.run(a))  # generates 'A2'
+  print(sess2.run(b))  # generates 'B1'
+  print(sess2.run(b))  # generates 'B2'
+```
+
+##### Args:
+
+
+* `seed`: integer.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md
new file mode 100644
index 0000000000..67f1bc4885
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md
@@ -0,0 +1,24 @@
+### `tf.size(input, name=None)` {#size}
+
+Returns the size of a tensor.
+
+This operation returns an integer representing the number of elements in
+`input`.
+
+For example:
+
+```prettyprint
+# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
+size(t) ==> 12
+```
+
+##### Args:
+
+
+* `input`: A `Tensor`.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  A `Tensor` of type `int32`.
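The element count above can be sketched for a nested Python list with a small recursive helper (a sketch, not TensorFlow's implementation; the name `size` is reused for illustration):

```python
def size(t):
    """Number of scalar elements in a nested-list "tensor"."""
    if not isinstance(t, list):
        return 1  # a scalar counts as one element
    return sum(size(x) for x in t)


t = [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]  # shape [2, 2, 3]
print(size(t))  # 12
```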
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.slice.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.slice.md new file mode 100644 index 0000000000..6da47df0b0 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.slice.md @@ -0,0 +1,47 @@ +### `tf.slice(input_, begin, size, name=None)` {#slice} + +Extracts a slice from a tensor. + +This operation extracts a slice of size `size` from a tensor `input` starting +at the location specified by `begin`. The slice `size` is represented as a +tensor shape, where `size[i]` is the number of elements of the 'i'th dimension +of `input` that you want to slice. The starting location (`begin`) for the +slice is represented as an offset in each dimension of `input`. In other +words, `begin[i]` is the offset into the 'i'th dimension of `input` that you +want to slice from. + +`begin` is zero-based; `size` is one-based. If `size[i]` is -1, +all remaining elements in dimension i are included in the +slice. In other words, this is equivalent to setting: + +`size[i] = input.dim_size(i) - begin[i]` + +This operation requires that: + +`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]` + +For example: + +``` +# 'input' is [[[1, 1, 1], [2, 2, 2]], +# [[3, 3, 3], [4, 4, 4]], +# [[5, 5, 5], [6, 6, 6]]] +tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]] +tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3], + [4, 4, 4]]] +tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]], + [[5, 5, 5]]] +``` + +##### Args: + + +* `input_`: A `Tensor`. +* `begin`: An `int32` or `int64` `Tensor`. +* `size`: An `int32` or `int64` `Tensor`. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor` the same type as `input`. 
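The `begin`/`size` semantics above, including the `size[i] = -1` convention, can be sketched for nested Python lists (the helper name `slice_nd` is hypothetical; TensorFlow implements this as a kernel, not in Python):

```python
def slice_nd(t, begin, size):
    """Take size[0] items starting at begin[0], then recurse into each item.
    A size of -1 keeps all remaining elements in that dimension."""
    if not begin:  # ran out of dimensions: t is a scalar
        return t
    end = len(t) if size[0] == -1 else begin[0] + size[0]
    return [slice_nd(x, begin[1:], size[1:]) for x in t[begin[0]:end]]


inp = [[[1, 1, 1], [2, 2, 2]],
       [[3, 3, 3], [4, 4, 4]],
       [[5, 5, 5], [6, 6, 6]]]
print(slice_nd(inp, [1, 0, 0], [1, 1, 3]))  # [[[3, 3, 3]]]
print(slice_nd(inp, [1, 0, 0], [2, 1, 3]))  # [[[3, 3, 3]], [[5, 5, 5]]]
```

This reproduces the three examples in the doc above.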
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reorder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reorder.md deleted file mode 100644 index 1e7b8fd857..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reorder.md +++ /dev/null @@ -1,41 +0,0 @@ -### `tf.sparse_reorder(sp_input, name=None)` {#sparse_reorder} - -Reorders a `SparseTensor` into the canonical, row-major ordering. - -Note that by convention, all sparse ops preserve the canonical ordering -along increasing dimension number. The only time ordering can be violated -is during manual manipulation of the indices and values to add entries. - -Reordering does not affect the shape of the `SparseTensor`. - -For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`: - - [0, 3]: b - [0, 1]: a - [3, 1]: d - [2, 0]: c - -then the output will be a `SparseTensor` of shape `[4, 5]` and -`indices` / `values`: - - [0, 1]: a - [0, 3]: b - [2, 0]: c - [3, 1]: d - -##### Args: - - -* `sp_input`: The input `SparseTensor`. -* `name`: A name prefix for the returned tensors (optional) - -##### Returns: - - A `SparseTensor` with the same shape and non-empty values, but in - canonical ordering. - -##### Raises: - - -* `TypeError`: If `sp_input` is not a `SparseTensor`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n.md new file mode 100644 index 0000000000..bc665a42a8 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n.md @@ -0,0 +1,26 @@ +### `tf.sparse_segment_sqrt_n(data, indices, segment_ids, name=None)` {#sparse_segment_sqrt_n} + +Computes the sum along sparse segments of a tensor divided by the sqrt of N. + +N is the size of the segment being reduced. 
+ +Read [the section on +Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation +of segments. + +##### Args: + + +* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`. +* `indices`: A `Tensor` of type `int32`. + A 1-D tensor. Has same rank as `segment_ids`. +* `segment_ids`: A `Tensor` of type `int32`. + A 1-D tensor. Values should be sorted and can be repeated. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `data`. + Has same shape as data, except for dimension 0 which + has size `k`, the number of segments. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n_grad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n_grad.md new file mode 100644 index 0000000000..2a2e0c9e33 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sqrt_n_grad.md @@ -0,0 +1,24 @@ +### `tf.sparse_segment_sqrt_n_grad(grad, indices, segment_ids, output_dim0, name=None)` {#sparse_segment_sqrt_n_grad} + +Computes gradients for SparseSegmentSqrtN. + +Returns tensor "output" with same shape as grad, except for dimension 0 whose +value is output_dim0. + +##### Args: + + +* `grad`: A `Tensor`. Must be one of the following types: `float32`, `float64`. + gradient propagated to the SparseSegmentSqrtN op. +* `indices`: A `Tensor` of type `int32`. + indices passed to the corresponding SparseSegmentSqrtN op. +* `segment_ids`: A `Tensor` of type `int32`. + segment_ids passed to the corresponding SparseSegmentSqrtN op. +* `output_dim0`: A `Tensor` of type `int32`. + dimension 0 of "data" passed to SparseSegmentSqrtN op. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `grad`. 
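A minimal pure-Python sketch of the "sum divided by sqrt(N)" rule that the two ops above implement and differentiate, assuming sorted `segment_ids` and no empty segments (the helper name mirrors the op for illustration only):

```python
import math


def sparse_segment_sqrt_n(data, indices, segment_ids):
    """Sum the rows data[i] for each i in `indices`, grouped by segment,
    then divide each segment's sum by sqrt(N), N = rows in that segment."""
    num_segments = segment_ids[-1] + 1 if segment_ids else 0
    width = len(data[0]) if data else 0
    out = [[0.0] * width for _ in range(num_segments)]
    counts = [0] * num_segments
    for row, seg in zip(indices, segment_ids):
        counts[seg] += 1
        for j, v in enumerate(data[row]):
            out[seg][j] += v
    # Assumes every segment id occurs at least once (no division by zero).
    return [[v / math.sqrt(n) for v in row] for row, n in zip(out, counts)]


data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
# All four selected rows fall into segment 0, so N = 4 and sqrt(N) = 2.
print(sparse_segment_sqrt_n(data, [0, 1, 2, 3], [0, 0, 0, 0]))  # [[8.0, 10.0]]
```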
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sum.md deleted file mode 100644 index 6691a6b7bc..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_segment_sum.md +++ /dev/null @@ -1,50 +0,0 @@ -### `tf.sparse_segment_sum(data, indices, segment_ids, name=None)` {#sparse_segment_sum} - -Computes the sum along sparse segments of a tensor. - -Read [the section on -Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation -of segments. - -Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first -dimension, selecting a subset of dimension 0, specified by `indices`. - -For example: - -```prettyprint -c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) - -# Select two rows, one segment. -tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) - ==> [[0 0 0 0]] - -# Select two rows, two segment. -tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1])) - ==> [[ 1 2 3 4] - [-1 -2 -3 -4]] - -# Select all rows, two segments. -tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1])) - ==> [[0 0 0 0] - [5 6 7 8]] - -# Which is equivalent to: -tf.segment_sum(c, tf.constant([0, 0, 1])) -``` - -##### Args: - - -* `data`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`. -* `indices`: A `Tensor` of type `int32`. - A 1-D tensor. Has same rank as `segment_ids`. -* `segment_ids`: A `Tensor` of type `int32`. - A 1-D tensor. Values should be sorted and can be repeated. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `data`. - Has same shape as data, except for dimension 0 which - has size `k`, the number of segments. 
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_split.md
new file mode 100644
index 0000000000..e3e608a9e2
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_split.md
@@ -0,0 +1,40 @@
+### `tf.sparse_split(split_dim, num_split, sp_input, name=None)` {#sparse_split}
+
+Split a `SparseTensor` into `num_split` tensors along `split_dim`.
+
+If `sp_input.shape[split_dim]` is not an integer multiple of `num_split`,
+each slice starting from 0:`shape[split_dim] % num_split` gets one extra
+element along `split_dim`. For example, if `split_dim = 1` and
+`num_split = 2` and the input is:
+
+    input_tensor = shape = [2, 7]
+    [    a   d e  ]
+    [b c          ]
+
+Graphically the output tensors are:
+
+    output_tensor[0] =
+    [    a ]
+    [b c   ]
+
+    output_tensor[1] =
+    [ d e  ]
+    [      ]
+
+##### Args:
+
+
+* `split_dim`: A 0-D `int32` `Tensor`. The dimension along which to split.
+* `num_split`: A Python integer. The number of ways to split.
+* `sp_input`: The `SparseTensor` to split.
+* `name`: A name for the operation (optional).
+
+##### Returns:
+
+  `num_split` `SparseTensor` objects resulting from splitting `sp_input`.
+
+##### Raises:
+
+
+* `TypeError`: If `sp_input` is not a `SparseTensor`.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_to_indicator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_to_indicator.md
new file mode 100644
index 0000000000..8ee455be32
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_to_indicator.md
@@ -0,0 +1,52 @@
+### `tf.sparse_to_indicator(sp_input, vocab_size, name=None)` {#sparse_to_indicator}
+
+Converts a `SparseTensor` of ids into a dense bool indicator tensor.
+
+The last dimension of `sp_input.indices` is discarded and replaced with
+the values of `sp_input`. 
If `sp_input.shape = [D0, D1, ..., Dn, K]`, then +`output.shape = [D0, D1, ..., Dn, vocab_size]`, where + + output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True + +and False elsewhere in `output`. + +For example, if `sp_input.shape = [2, 3, 4]` with non-empty values: + + [0, 0, 0]: 0 + [0, 1, 0]: 10 + [1, 0, 3]: 103 + [1, 1, 2]: 150 + [1, 1, 3]: 149 + [1, 1, 4]: 150 + [1, 2, 1]: 121 + +and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool +tensor with False everywhere except at positions + + (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150), + (1, 2, 121). + +Note that repeats are allowed in the input SparseTensor. +This op is useful for converting `SparseTensor`s into dense formats for +compatibility with ops that expect dense tensors. + +The input `SparseTensor` must be in row-major order. + +##### Args: + + +* `sp_input`: A `SparseTensor` with `values` property of type `int32` or + `int64`. +* `vocab_size`: A scalar int64 Tensor (or Python int) containing the new size + of the last dimension, `all(0 <= sp_input.values < vocab_size)`. +* `name`: A name prefix for the returned tensors (optional) + +##### Returns: + + A dense bool indicator tensor representing the indices with specified value. + +##### Raises: + + +* `TypeError`: If `sp_input` is not a `SparseTensor`. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sqrt.md deleted file mode 100644 index 250817f3bf..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sqrt.md +++ /dev/null @@ -1,16 +0,0 @@ -### `tf.sqrt(x, name=None)` {#sqrt} - -Computes square root of x element-wise. - -I.e., \\(y = \sqrt{x} = x^{1/2}\\). - -##### Args: - - -* `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. -* `name`: A name for the operation (optional). 
- -##### Returns: - - A `Tensor`. Has the same type as `x`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squeeze.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squeeze.md new file mode 100644 index 0000000000..e76c02e115 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squeeze.md @@ -0,0 +1,38 @@ +### `tf.squeeze(input, squeeze_dims=None, name=None)` {#squeeze} + +Removes dimensions of size 1 from the shape of a tensor. + +Given a tensor `input`, this operation returns a tensor of the same type with +all dimensions of size 1 removed. If you don't want to remove all size 1 +dimensions, you can remove specific size 1 dimensions by specifying +`squeeze_dims`. + +For example: + +```prettyprint +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +shape(squeeze(t)) ==> [2, 3] +``` + +Or, to remove specific size 1 dimensions: + +```prettyprint +# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] +shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1] +``` + +##### Args: + + +* `input`: A `Tensor`. The `input` to squeeze. +* `squeeze_dims`: An optional list of `ints`. Defaults to `[]`. + If specified, only squeezes the dimensions listed. The dimension + index starts at 0. It is an error to squeeze a dimension that is not 1. +* `name`: A name for the operation (optional). + +##### Returns: + + A `Tensor`. Has the same type as `input`. + Contains the same data as `input`, but has one or more dimensions of + size 1 removed. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.stop_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.stop_gradient.md deleted file mode 100644 index 53759f49ff..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.stop_gradient.md +++ /dev/null @@ -1,34 +0,0 @@ -### `tf.stop_gradient(input, name=None)` {#stop_gradient} - -Stops gradient computation. 
- -When executed in a graph, this op outputs its input tensor as-is. - -When building ops to compute gradients, this op prevents the contribution of -its inputs to be taken into account. Normally, the gradient generator adds ops -to a graph to compute the derivatives of a specified 'loss' by recursively -finding out inputs that contributed to its computation. If you insert this op -in the graph it inputs are masked from the gradient generator. They are not -taken into account for computing gradients. - -This is useful any time you want to compute a value with TensorFlow but need -to pretend that the value was a constant. Some examples include: - -* The *EM* algorithm where the *M-step* should not involve backpropagation - through the output of the *E-step*. -* Contrastive divergence training of Boltzmann machines where, when - differentiating the energy function, the training must not backpropagate - through the graph that generated the samples from the model. -* Adversarial training, where no backprop should happen through the adversarial - example generation process. - -##### Args: - - -* `input`: A `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor`. Has the same type as `input`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.compute_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.compute_gradient.md new file mode 100644 index 0000000000..19b302d466 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.compute_gradient.md @@ -0,0 +1,40 @@ +### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None)` {#compute_gradient} + +Computes and returns the theoretical and numerical Jacobian. + +If `x` or `y` is complex, the Jacobian will still be real but the +corresponding Jacobian dimension(s) will be twice as large. 
This is required
+even if both input and output are complex since TensorFlow graphs are not
+necessarily holomorphic, and may have gradients not expressible as complex
+numbers. For example, if `x` is complex with shape `[m]` and `y` is complex
+with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with
+
+    J[:m, :n] = d(Re y)/d(Re x)
+    J[:m, n:] = d(Im y)/d(Re x)
+    J[m:, :n] = d(Re y)/d(Im x)
+    J[m:, n:] = d(Im y)/d(Im x)
+
+##### Args:
+
+
+* `x`: a tensor or list of tensors
+* `x_shape`: the dimensions of x as a tuple or an array of ints. If x is a list,
+  then this is the list of shapes.
+
+* `y`: a tensor
+* `y_shape`: the dimensions of y as a tuple or an array of ints.
+* `x_init_value`: (optional) a numpy array of the same shape as "x"
+  representing the initial value of x. If x is a list, this should be a list
+  of numpy arrays. If this is none, the function will pick a random tensor
+  as the initial value.
+* `delta`: (optional) the amount of perturbation.
+* `init_targets`: list of targets to run to initialize model params.
+  TODO(mrry): remove this argument.
+
+##### Returns:
+
+  Two 2-d numpy arrays representing the theoretical and numerical
+  Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns
+  where "x_size" is the number of elements in x and "y_size" is the
+  number of elements in y. If x is a list, returns a list of two numpy arrays.
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.AdamOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.AdamOptimizer.md
new file mode 100644
index 0000000000..8667ec8ed3
--- /dev/null
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.AdamOptimizer.md
@@ -0,0 +1,49 @@
+Optimizer that implements the Adam algorithm.
+
+See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
+([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
+
+- - -
+
+#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
+
+Construct a new Adam optimizer.
+
+Initialization:
+
+```
+m_0 <- 0 (Initialize initial 1st moment vector)
+v_0 <- 0 (Initialize initial 2nd moment vector)
+t <- 0 (Initialize timestep)
+```
+
+The update rule for `variable` with gradient `g` uses an optimization
+described at the end of section 2 of the paper:
+
+```
+t <- t + 1
+lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
+
+m_t <- beta1 * m_{t-1} + (1 - beta1) * g
+v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
+variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
+```
+
+The default value of 1e-8 for epsilon might not be a good default in
+general. For example, when training an Inception network on ImageNet, a
+current good choice is 1.0 or 0.1.
+
+##### Args:
+
+
+* `learning_rate`: A Tensor or a floating point value. The learning rate.
+* `beta1`: A float value or a constant float tensor.
+  The exponential decay rate for the 1st moment estimates.
+* `beta2`: A float value or a constant float tensor.
+  The exponential decay rate for the 2nd moment estimates.
+* `epsilon`: A small constant for numerical stability.
+* `use_locking`: If True use locks for update operations.
+* `name`: Optional name for the operations created when applying gradients.
+  Defaults to "Adam".
+
+
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ClusterSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ClusterSpec.md
deleted file mode 100644
index c695781a86..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ClusterSpec.md
+++ /dev/null
@@ -1,86 +0,0 @@
-Represents a cluster as a set of "tasks", organized into "jobs".
-
-A `tf.train.ClusterSpec` represents the set of processes that
-participate in a distributed TensorFlow computation. 
Every -[`tf.train.Server`](#Server) is constructed in a particular cluster. - -To create a cluster with two jobs and five tasks, you specify the -mapping from job names to lists of network addresses (typically -hostname-port pairs). - -``` -cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222", - "worker1.example.com:2222", - "worker2.example.com:2222"], - "ps": ["ps0.example.com:2222", - "ps1.example.com:2222"]}) -``` - -- - - - -#### `tf.train.ClusterSpec.as_cluster_def()` {#ClusterSpec.as_cluster_def} - -Returns a `tf.train.ClusterDef` protocol buffer based on this cluster. - - -- - - - -#### `tf.train.ClusterSpec.as_dict()` {#ClusterSpec.as_dict} - -Returns a dictionary from job names to lists of network addresses. - - - -#### Other Methods -- - - - -#### `tf.train.ClusterSpec.__init__(cluster)` {#ClusterSpec.__init__} - -Creates a `ClusterSpec`. - -##### Args: - - -* `cluster`: A dictionary mapping one or more job names to lists of network - addresses, or a `tf.train.ClusterDef` protocol buffer. - -##### Raises: - - -* `TypeError`: If `cluster` is not a dictionary mapping strings to lists - of strings, and not a `tf.train.ClusterDef` protobuf. - - -- - - - -#### `tf.train.ClusterSpec.job_tasks(job_name)` {#ClusterSpec.job_tasks} - -Returns a list of tasks in the given job. - -##### Args: - - -* `job_name`: The string name of a job in this cluster. - -##### Returns: - - A list of strings, corresponding to the network addresses of tasks in - the given job, ordered by task index. - -##### Raises: - - -* `ValueError`: If `job_name` does not name a job in this cluster. - - -- - - - -#### `tf.train.ClusterSpec.jobs` {#ClusterSpec.jobs} - -Returns a list of job names in this cluster. - -##### Returns: - - A list of strings, corresponding to the names of jobs in this cluster. 
- - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ExponentialMovingAverage.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ExponentialMovingAverage.md deleted file mode 100644 index ea0aa48161..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ExponentialMovingAverage.md +++ /dev/null @@ -1,229 +0,0 @@ -Maintains moving averages of variables by employing an exponential decay. - -When training a model, it is often beneficial to maintain moving averages of -the trained parameters. Evaluations that use averaged parameters sometimes -produce significantly better results than the final trained values. - -The `apply()` method adds shadow copies of trained variables and add ops that -maintain a moving average of the trained variables in their shadow copies. -It is used when building the training model. The ops that maintain moving -averages are typically run after each training step. -The `average()` and `average_name()` methods give access to the shadow -variables and their names. They are useful when building an evaluation -model, or when restoring a model from a checkpoint file. They help use the -moving averages in place of the last trained values for evaluations. - -The moving averages are computed using exponential decay. You specify the -decay value when creating the `ExponentialMovingAverage` object. The shadow -variables are initialized with the same initial values as the trained -variables. 
When you run the ops to maintain the moving averages, each -shadow variable is updated with the formula: - - `shadow_variable -= (1 - decay) * (shadow_variable - variable)` - -This is mathematically equivalent to the classic formula below, but the use -of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless -updates to the variables: - - `shadow_variable = decay * shadow_variable + (1 - decay) * variable` - -Reasonable values for `decay` are close to 1.0, typically in the -multiple-nines range: 0.999, 0.9999, etc. - -Example usage when creating a training model: - -```python -# Create variables. -var0 = tf.Variable(...) -var1 = tf.Variable(...) -# ... use the variables to build a training model... -... -# Create an op that applies the optimizer. This is what we usually -# would use as a training op. -opt_op = opt.minimize(my_loss, [var0, var1]) - -# Create an ExponentialMovingAverage object -ema = tf.train.ExponentialMovingAverage(decay=0.9999) - -# Create the shadow variables, and add ops to maintain moving averages -# of var0 and var1. -maintain_averages_op = ema.apply([var0, var1]) - -# Create an op that will update the moving averages after each training -# step. This is what we will use in place of the usual training op. -with tf.control_dependencies([opt_op]): - training_op = tf.group(maintain_averages_op) - -...train the model by running training_op... -``` - -There are two ways to use the moving averages for evaluations: - -* Build a model that uses the shadow variables instead of the variables. - For this, use the `average()` method which returns the shadow variable - for a given variable. -* Build a model normally but load the checkpoint files to evaluate by using - the shadow variable names. For this use the `average_name()` method. See - the [Saver class](../../api_docs/python/train.md#Saver) for more - information on restoring saved variables. 
- -Example of restoring the shadow variable values: - -```python -# Create a Saver that loads variables from their saved shadow values. -shadow_var0_name = ema.average_name(var0) -shadow_var1_name = ema.average_name(var1) -saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1}) -saver.restore(...checkpoint filename...) -# var0 and var1 now hold the moving average values -``` - -- - - - -#### `tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage')` {#ExponentialMovingAverage.__init__} - -Creates a new ExponentialMovingAverage object. - -The `apply()` method has to be called to create shadow variables and add -ops to maintain moving averages. - -The optional `num_updates` parameter allows one to tweak the decay rate -dynamically. It is typical to pass the count of training steps, usually -kept in a variable that is incremented at each step, in which case the -decay rate is lower at the start of training. This makes moving averages -move faster. If passed, the actual decay rate used is: - - `min(decay, (1 + num_updates) / (10 + num_updates))` - -##### Args: - - -* `decay`: Float. The decay to use. -* `num_updates`: Optional count of updates applied to variables. -* `name`: String. Optional prefix name to use for the name of ops added in - `apply()`. - - -- - - - -#### `tf.train.ExponentialMovingAverage.apply(var_list=None)` {#ExponentialMovingAverage.apply} - -Maintains moving averages of variables. - -`var_list` must be a list of `Variable` or `Tensor` objects. This method -creates shadow variables for all elements of `var_list`. Shadow variables -for `Variable` objects are initialized to the variable's initial value. -They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. -For `Tensor` objects, the shadow variables are initialized to 0. - -Shadow variables are created with `trainable=False` and added to the -`GraphKeys.ALL_VARIABLES` collection.
They will be returned by calls to -`tf.all_variables()`. - -Returns an op that updates all shadow variables as described above. - -Note that `apply()` can be called multiple times with different lists of -variables. - -##### Args: - - -* `var_list`: A list of Variable or Tensor objects. The variables - and Tensors must be of types float32 or float64. - -##### Returns: - - An Operation that updates the moving averages. - -##### Raises: - - -* `TypeError`: If the arguments are not all float32 or float64. -* `ValueError`: If the moving average of one of the variables is already - being computed. - - -- - - - -#### `tf.train.ExponentialMovingAverage.average_name(var)` {#ExponentialMovingAverage.average_name} - -Returns the name of the `Variable` holding the average for `var`. - -The typical scenario for `ExponentialMovingAverage` is to compute moving -averages of variables during training, and restore the variables from the -computed moving averages during evaluations. - -To restore variables, you have to know the name of the shadow variables. -That name and the original variable can then be passed to a `Saver()` object -to restore the variable from the moving average value with: - `saver = tf.train.Saver({ema.average_name(var): var})` - -`average_name()` can be called whether or not `apply()` has been called. - -##### Args: - - -* `var`: A `Variable` object. - -##### Returns: - - A string: The name of the variable that will be used or was used - by the `ExponentialMovingAverage` class to hold the moving average of - `var`. - - -- - - - -#### `tf.train.ExponentialMovingAverage.average(var)` {#ExponentialMovingAverage.average} - -Returns the `Variable` holding the average of `var`. - -##### Args: - - -* `var`: A `Variable` object. - -##### Returns: - - A `Variable` object or `None` if the moving average of `var` - is not maintained.
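The dynamic decay schedule described for `__init__` above can be sketched in plain Python; `effective_decay` here is a hypothetical helper for illustration, mirroring the documented formula rather than any TensorFlow API:

```python
def effective_decay(decay, num_updates=None):
    # Mirrors the documented rule: min(decay, (1 + num_updates) / (10 + num_updates)).
    if num_updates is None:
        return decay
    return min(decay, (1 + num_updates) / (10 + num_updates))

# Early in training the effective decay is small, so averages move fast;
# late in training it is capped at the configured decay.
early = effective_decay(0.9999, num_updates=0)
late = effective_decay(0.9999, num_updates=10**6)
```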
- - -- - - - -#### `tf.train.ExponentialMovingAverage.variables_to_restore(moving_avg_variables=None)` {#ExponentialMovingAverage.variables_to_restore} - -Returns a map of names to `Variables` to restore. - -If a variable has a moving average, use the moving average variable name as -the restore name; otherwise, use the variable name. - -For example, - -```python - variables_to_restore = ema.variables_to_restore() - saver = tf.train.Saver(variables_to_restore) -``` - -Below is an example of such a mapping: - -``` - conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma, - conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params, - global_step: global_step -``` - -##### Args: - - -* `moving_avg_variables`: A list of variables that require the use of the - moving average variable name for restoring. If None, it will default to - variables.moving_average_variables() + variables.trainable_variables() - -##### Returns: - - A map from restore_names to variables. The restore_name can be the - moving_average version of the variable name if it exists, or the original - variable name. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.Optimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.Optimizer.md deleted file mode 100644 index d5d8bb13dd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.Optimizer.md +++ /dev/null @@ -1,255 +0,0 @@ -Base class for optimizers. - -This class defines the API to add Ops to train a model. You never use this -class directly, but instead instantiate one of its subclasses such as -`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`. - -### Usage - -```python -# Create an optimizer with the desired parameters. -opt = GradientDescentOptimizer(learning_rate=0.1) -# Add Ops to the graph to minimize a cost by updating a list of variables.
-# "cost" is a Tensor, and the list of variables contains tf.Variable -# objects. -opt_op = opt.minimize(cost, var_list=) -``` - -In the training program you will just have to run the returned Op. - -```python -# Execute opt_op to do one step of training: -opt_op.run() -``` - -### Processing gradients before applying them. - -Calling `minimize()` takes care of both computing the gradients and -applying them to the variables. If you want to process the gradients -before applying them you can instead use the optimizer in three steps: - -1. Compute the gradients with `compute_gradients()`. -2. Process the gradients as you wish. -3. Apply the processed gradients with `apply_gradients()`. - -Example: - -```python -# Create an optimizer. -opt = GradientDescentOptimizer(learning_rate=0.1) - -# Compute the gradients for a list of variables. -grads_and_vars = opt.compute_gradients(loss, ) - -# grads_and_vars is a list of tuples (gradient, variable). Do whatever you -# need to the 'gradient' part, for example cap them, etc. -capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars] - -# Ask the optimizer to apply the capped gradients. -opt.apply_gradients(capped_grads_and_vars) -``` - -- - - - -#### `tf.train.Optimizer.__init__(use_locking, name)` {#Optimizer.__init__} - -Create a new Optimizer. - -This must be called by the constructors of subclasses. - -##### Args: - - -* `use_locking`: Bool. If True apply use locks to prevent concurrent updates - to variables. -* `name`: A non-empty string. The name to use for accumulators created - for the optimizer. - -##### Raises: - - -* `ValueError`: If name is malformed. - - - -- - - - -#### `tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#Optimizer.minimize} - -Add operations to minimize `loss` by updating `var_list`. 
- -This method simply combines calls to `compute_gradients()` and -`apply_gradients()`. If you want to process the gradients before applying -them, call `compute_gradients()` and `apply_gradients()` explicitly instead -of using this function. - -##### Args: - - -* `loss`: A `Tensor` containing the value to minimize. -* `global_step`: Optional `Variable` to increment by one after the - variables have been updated. -* `var_list`: Optional list of `Variable` objects to update to minimize - `loss`. Defaults to the list of variables collected in the graph - under the key `GraphKeys.TRAINABLE_VARIABLES`. -* `gate_gradients`: How to gate the computation of gradients. Can be - `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. -* `aggregation_method`: Specifies the method used to combine gradient terms. - Valid values are defined in the class `AggregationMethod`. -* `colocate_gradients_with_ops`: If True, try colocating gradients with - the corresponding op. -* `name`: Optional name for the returned operation. -* `grad_loss`: Optional. A `Tensor` holding the gradient computed for `loss`. - -##### Returns: - - An Operation that updates the variables in `var_list`. If `global_step` - was not `None`, that operation also increments `global_step`. - -##### Raises: - - -* `ValueError`: If some of the variables are not `Variable` objects. - - -- - - - -#### `tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#Optimizer.compute_gradients} - -Compute gradients of `loss` for the variables in `var_list`. - -This is the first part of `minimize()`. It returns a list -of (gradient, variable) pairs where "gradient" is the gradient -for "variable". Note that "gradient" can be a `Tensor`, an -`IndexedSlices`, or `None` if there is no gradient for the -given variable. - -##### Args: - - -* `loss`: A Tensor containing the value to minimize.
-* `var_list`: Optional list of tf.Variable to update to minimize - `loss`. Defaults to the list of variables collected in the graph - under the key `GraphKeys.TRAINABLE_VARIABLES`. -* `gate_gradients`: How to gate the computation of gradients. Can be - `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`. -* `aggregation_method`: Specifies the method used to combine gradient terms. - Valid values are defined in the class `AggregationMethod`. -* `colocate_gradients_with_ops`: If True, try colocating gradients with - the corresponding op. -* `grad_loss`: Optional. A `Tensor` holding the gradient computed for `loss`. - -##### Returns: - - A list of (gradient, variable) pairs. - -##### Raises: - - -* `TypeError`: If `var_list` contains anything other than `Variable` objects. -* `ValueError`: If some arguments are invalid. - - -- - - - -#### `tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#Optimizer.apply_gradients} - -Apply gradients to variables. - -This is the second part of `minimize()`. It returns an `Operation` that -applies gradients. - -##### Args: - - -* `grads_and_vars`: List of (gradient, variable) pairs as returned by - `compute_gradients()`. -* `global_step`: Optional `Variable` to increment by one after the - variables have been updated. -* `name`: Optional name for the returned operation. Defaults to the - name passed to the `Optimizer` constructor. - -##### Returns: - - An `Operation` that applies the specified gradients. If `global_step` - was not None, that operation also increments `global_step`. - -##### Raises: - - -* `TypeError`: If `grads_and_vars` is malformed. -* `ValueError`: If none of the variables have gradients. - - - -### Gating Gradients - -Both `minimize()` and `compute_gradients()` accept a `gate_gradients` argument -that controls the degree of parallelism during the application of the -gradients. - -The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
- -`GATE_NONE`: Compute and apply gradients in parallel. This provides -the maximum parallelism in execution, at the cost of some non-reproducibility -in the results. For example, the two gradients of `matmul` depend on the input -values: With `GATE_NONE` one of the gradients could be applied to one of the -inputs _before_ the other gradient is computed, resulting in non-reproducible -results. - -`GATE_OP`: For each Op, make sure all gradients are computed before -they are used. This prevents race conditions for Ops that generate gradients -for multiple inputs where the gradients depend on the inputs. - -`GATE_GRAPH`: Make sure all gradients for all variables are computed -before any one of them is used. This provides the least parallelism but can -be useful if you want to process all gradients before applying any of them. - -### Slots - -Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`, -allocate and manage additional variables associated with the variables to -train. These are called Slots. Slots have names and you can ask the -optimizer for the names of the slots that it uses. Once you have a slot name -you can ask the optimizer for the variable it created to hold the slot value. - -This can be useful if you want to debug a training algorithm, report stats -about the slots, etc. - -- - - - -#### `tf.train.Optimizer.get_slot_names()` {#Optimizer.get_slot_names} - -Return a list of the names of slots created by the `Optimizer`. - -See `get_slot()`. - -##### Returns: - - A list of strings. - - -- - - - -#### `tf.train.Optimizer.get_slot(var, name)` {#Optimizer.get_slot} - -Return a slot named `name` created for `var` by the Optimizer. - -Some `Optimizer` subclasses use additional variables. For example -`Momentum` and `Adagrad` use variables to accumulate updates. This method -gives access to these `Variable` objects if for some reason you need them. - -Use `get_slot_names()` to get the list of slot names created by the -`Optimizer`.
- -##### Args: - - -* `var`: A variable passed to `minimize()` or `apply_gradients()`. -* `name`: A string. - -##### Returns: - - The `Variable` for the slot if it was created, `None` otherwise. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.from_proto.md deleted file mode 100644 index 2caa8f769a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.from_proto.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.train.QueueRunner.from_proto(queue_runner_def)` {#QueueRunner.from_proto} - -Returns a `QueueRunner` object created from `queue_runner_def`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.md deleted file mode 100644 index 812dc2b5bd..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.QueueRunner.md +++ /dev/null @@ -1,161 +0,0 @@ -Holds a list of enqueue operations for a queue, each to be run in a thread. - -Queues are a convenient TensorFlow mechanism to compute tensors -asynchronously using multiple threads. For example, in the canonical 'Input -Reader' setup one set of threads generates filenames in a queue; a second set -of threads reads records from the files, processes them, and enqueues tensors -on a second queue; a third set of threads dequeues these input records to -construct batches and runs them through training operations. - -There are several delicate issues when running multiple threads that way: -closing the queues in sequence as the input is exhausted, correctly catching -and reporting exceptions, etc. - -The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
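The multi-threaded enqueue pattern described above can be sketched with Python's standard `threading` and `queue` modules; this is a plain-Python stand-in for the pattern, not the TensorFlow `QueueRunner` API itself:

```python
import queue
import threading

def producer(q, items):
    # Each thread runs its "enqueue op" in parallel with the others.
    for item in items:
        q.put(item)

q = queue.Queue(maxsize=32)
threads = [
    threading.Thread(target=producer, args=(q, range(i * 10, i * 10 + 10)))
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The consumer dequeues everything the three producer threads enqueued.
batch = sorted(q.get() for _ in range(30))
```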
-- - - - -#### `tf.train.QueueRunner.__init__(queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_runner_def=None)` {#QueueRunner.__init__} - -Create a QueueRunner. - -On construction the `QueueRunner` adds an op to close the queue. That op -will be run if the enqueue ops raise exceptions. - -When you later call the `create_threads()` method, the `QueueRunner` will -create one thread for each op in `enqueue_ops`. Each thread will run its -enqueue op in parallel with the other threads. The enqueue ops do not have -to all be the same op, but it is expected that they all enqueue tensors in -`queue`. - -##### Args: - - -* `queue`: A `Queue`. -* `enqueue_ops`: List of enqueue ops to run in threads later. -* `close_op`: Op to close the queue. Pending enqueue ops are preserved. -* `cancel_op`: Op to close the queue and cancel pending enqueue ops. -* `queue_runner_def`: Optional `QueueRunnerDef` protocol buffer. If specified, - recreates the QueueRunner from its contents. `queue_runner_def` and the - other arguments are mutually exclusive. - -##### Raises: - - -* `ValueError`: If both `queue_runner_def` and `queue` are specified. -* `ValueError`: If `queue` or `enqueue_ops` are not provided when not - restoring from `queue_runner_def`. - - -- - - - -#### `tf.train.QueueRunner.cancel_op` {#QueueRunner.cancel_op} - - - - -- - - - -#### `tf.train.QueueRunner.close_op` {#QueueRunner.close_op} - - - - -- - - - -#### `tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False)` {#QueueRunner.create_threads} - -Create threads to run the enqueue ops. - -This method requires a session in which the graph was launched. It creates -a list of threads, optionally starting them. There is one thread for each -op passed in `enqueue_ops`. - -The `coord` argument is an optional coordinator that the threads will use -to terminate together and report exceptions.
If a coordinator is given, -this method starts an additional thread to close the queue when the -coordinator requests a stop. - -This method may be called again as long as all threads from a previous call -have stopped. - -##### Args: - - -* `sess`: A `Session`. -* `coord`: Optional `Coordinator` object for reporting errors and checking - stop conditions. -* `daemon`: Boolean. If `True` make the threads daemon threads. -* `start`: Boolean. If `True` starts the threads. If `False` the - caller must call the `start()` method of the returned threads. - -##### Returns: - - A list of threads. - -##### Raises: - - -* `RuntimeError`: If threads from a previous call to `create_threads()` are - still running. - - -- - - - -#### `tf.train.QueueRunner.enqueue_ops` {#QueueRunner.enqueue_ops} - - - - -- - - - -#### `tf.train.QueueRunner.exceptions_raised` {#QueueRunner.exceptions_raised} - -Exceptions raised but not handled by the `QueueRunner` threads. - -Exceptions raised in queue runner threads are handled in one of two ways -depending on whether or not a `Coordinator` was passed to -`create_threads()`: - -* With a `Coordinator`, exceptions are reported to the coordinator and - forgotten by the `QueueRunner`. -* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and - made available in this `exceptions_raised` property. - -##### Returns: - - A list of Python `Exception` objects. The list is empty if no exception - was captured. (No exceptions are captured when using a Coordinator.) - - -- - - - -#### `tf.train.QueueRunner.from_proto(queue_runner_def)` {#QueueRunner.from_proto} - -Returns a `QueueRunner` object created from `queue_runner_def`. - - -- - - - -#### `tf.train.QueueRunner.name` {#QueueRunner.name} - -The string name of the underlying Queue. 
- - -- - - - -#### `tf.train.QueueRunner.queue` {#QueueRunner.queue} - - - - -- - - - -#### `tf.train.QueueRunner.to_proto()` {#QueueRunner.to_proto} - -Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer. - -##### Returns: - - A `QueueRunnerDef` protocol buffer. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.batch.md new file mode 100644 index 0000000000..96142e0719 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.batch.md @@ -0,0 +1,68 @@ +### `tf.train.batch(tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, shared_name=None, name=None)` {#batch} + +Creates batches of tensors in `tensors`. + +The argument `tensors` can be a list or a dictionary of tensors. +The value returned by the function will be of the same type +as `tensors`. + +This function is implemented using a queue. A `QueueRunner` for the +queue is added to the current `Graph`'s `QUEUE_RUNNER` collection. + +If `enqueue_many` is `False`, `tensors` is assumed to represent a single +example. An input tensor with shape `[x, y, z]` will be output as a tensor +with shape `[batch_size, x, y, z]`. + +If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of +examples, where the first dimension is indexed by example, and all members of +`tensor_list` should have the same size in the first dimension. If an input +tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, +y, z]`. The `capacity` argument controls how long the prefetching is +allowed to grow the queues. + +The returned operation is a dequeue operation and will throw +`tf.errors.OutOfRangeError` if the input queue is exhausted.
If this +operation is feeding another input queue, its queue runner will catch +this exception; however, if this operation is used in your main thread +you are responsible for catching this yourself. + +*N.B.:* If `dynamic_pad` is `False`, you must ensure that either +(i) the `shapes` argument is passed, or (ii) all of the tensors in +`tensors` have fully-defined shapes. `ValueError` will be +raised if neither of these conditions holds. + +If `dynamic_pad` is `True`, it is sufficient that the *rank* of the +tensors is known, but individual dimensions may have shape `None`. +In this case, for each enqueue the dimensions with value `None` +may have a variable length; upon dequeue, the output tensors will be padded +on the right to the maximum shape of the tensors in the current minibatch. +For numbers, this padding takes value 0. For strings, this padding is +the empty string. See `PaddingFIFOQueue` for more info. + +##### Args: + + +* `tensors`: The list or dictionary of tensors to enqueue. +* `batch_size`: The new batch size pulled from the queue. +* `num_threads`: The number of threads enqueuing `tensor_list`. +* `capacity`: An integer. The maximum number of elements in the queue. +* `enqueue_many`: Whether each tensor in `tensor_list` is a single example. +* `shapes`: (Optional) The shapes for each example. Defaults to the + inferred shapes for `tensor_list`. +* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. + The given dimensions are padded upon dequeue so that tensors within a + batch have the same shapes. +* `shared_name`: (optional). If set, this queue will be shared under the given + name across multiple sessions. +* `name`: (Optional) A name for the operations. + -##### Returns: + + A list or dictionary of tensors with the same types as `tensors`. + +##### Raises: + + +* `ValueError`: If the `shapes` are not specified, and cannot be + inferred from the elements of `tensors`.
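The right-padding behavior that `dynamic_pad=True` describes above can be sketched in plain Python; `pad_batch` is a hypothetical helper for illustration, not part of the TensorFlow API:

```python
def pad_batch(seqs, pad_value=0):
    """Right-pad each sequence to the longest length in the minibatch."""
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]

# Variable-length rows come out as a dense, rectangular minibatch,
# padded on the right with 0 (the empty string would be used for strings).
padded = pad_batch([[1, 2, 3], [4], [5, 6]])
```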
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.export_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.export_meta_graph.md deleted file mode 100644 index c09e6783c6..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.export_meta_graph.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.train.export_meta_graph(filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False)` {#export_meta_graph} - -Returns `MetaGraphDef` proto. Optionally writes it to filename. - -This function exports the graph, saver, and collection objects into -`MetaGraphDef` protocol buffer with the intention of it being imported -at a later time or location to restart training, run inference, or be -a subgraph. - -##### Args: - - -* `filename`: Optional filename including the path for writing the - generated `MetaGraphDef` protocol buffer. -* `meta_info_def`: `MetaInfoDef` protocol buffer. -* `graph_def`: `GraphDef` protocol buffer. -* `saver_def`: `SaverDef` protocol buffer. -* `collection_list`: List of string keys to collect. -* `as_text`: If `True`, writes the `MetaGraphDef` as an ASCII proto. - -##### Returns: - - A `MetaGraphDef` proto. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.latest_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.latest_checkpoint.md new file mode 100644 index 0000000000..b1fc87cdd7 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.latest_checkpoint.md @@ -0,0 +1,16 @@ +### `tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)` {#latest_checkpoint} + +Finds the filename of the latest saved checkpoint file. + +##### Args: + + +* `checkpoint_dir`: Directory where the variables were saved.
+* `latest_filename`: Optional name for the protocol buffer file that + contains the list of most recent checkpoint filenames. + See the corresponding argument to `Saver.save()`. + +##### Returns: + + The full path to the latest checkpoint or `None` if no checkpoint was found. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.update_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.update_checkpoint_state.md new file mode 100644 index 0000000000..68747fc0c7 --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.update_checkpoint_state.md @@ -0,0 +1,24 @@ +### `tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)` {#update_checkpoint_state} + +Updates the content of the 'checkpoint' file. + +This updates the checkpoint file containing a CheckpointState +proto. + +##### Args: + + +* `save_dir`: Directory where the model was saved. +* `model_checkpoint_path`: The checkpoint file. +* `all_model_checkpoint_paths`: List of strings. Paths to all not-yet-deleted + checkpoints, sorted from oldest to newest. If this is a non-empty list, + the last element must be equal to model_checkpoint_path. These paths + are also saved in the CheckpointState proto. +* `latest_filename`: Optional name of the checkpoint file. Defaults to + 'checkpoint'. + +##### Raises: + + +* `RuntimeError`: If the save paths conflict.
+ diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.uniform_unit_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.uniform_unit_scaling_initializer.md deleted file mode 100644 index 6033fbf53a..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.uniform_unit_scaling_initializer.md +++ /dev/null @@ -1,47 +0,0 @@ -### `tf.uniform_unit_scaling_initializer(factor=1.0, seed=None, dtype=tf.float32, full_shape=None)` {#uniform_unit_scaling_initializer} - -Returns an initializer that generates tensors without scaling variance. - -When initializing a deep network, it is in principle advantageous to keep -the scale of the input variance constant, so it does not explode or diminish -by reaching the final layer. If the input is `x` and the operation `x * W`, -and we want to initialize `W` uniformly at random, we need to pick `W` from - - [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)] - -to keep the scale intact, where `dim = W.shape[0]` (the size of the input). -A similar calculation for convolutional networks gives an analogous result -with `dim` equal to the product of the first 3 dimensions. When -nonlinearities are present, we need to multiply this by a constant `factor`. -See [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558) -([pdf](http://arxiv.org/pdf/1412.6558.pdf)) for deeper motivation, experiments -and the calculation of constants. In section 2.3 there, the constants were -numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15. - -If the shape tuple `full_shape` is provided, the scale will be calculated from -this predefined shape. This is useful when a `Variable` is being partitioned -across several shards, and each shard has a smaller shape than the whole. -Since the shards are usually concatenated when used, the scale should be -based on the shape of the whole. - -##### Args: - - -* `factor`: Float. 
A multiplicative factor by which the values will be scaled. -* `seed`: A Python integer. Used to create random seeds. See - [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) - for behavior. -* `dtype`: The data type. Only floating point types are supported. -* `full_shape`: Tuple or list of integers. The shape used for calculating - scale normalization (instead of the shape passed at creation time). - Useful when creating sharded variables via partitioning. - -##### Returns: - - An initializer that generates tensors with unit variance. - -##### Raises: - - -* `ValueError`: if `dtype` is not a floating point type. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.where.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.where.md deleted file mode 100644 index eae2259721..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.where.md +++ /dev/null @@ -1,46 +0,0 @@ -### `tf.where(input, name=None)` {#where} - -Returns locations of true values in a boolean tensor. - -This operation returns the coordinates of true elements in `input`. The -coordinates are returned in a 2-D tensor where the first dimension (rows) -represents the number of true elements, and the second dimension (columns) -represents the coordinates of the true elements. Keep in mind, the shape of -the output tensor can vary depending on how many true values there are in -`input`. Indices are output in row-major order. - -For example: - -```prettyprint -# 'input' tensor is [[True, False] -# [True, False]] -# 'input' has two true values, so output has two coordinates. -# 'input' has rank of 2, so coordinates have two indices. -where(input) ==> [[0, 0], - [1, 0]] - -# `input` tensor is [[[True, False] -# [True, False]] -# [[False, True] -# [False, True]] -# [[False, False] -# [False, True]]] -# 'input' has 5 true values, so output has 5 coordinates. -# 'input' has rank of 3, so coordinates have three indices. 
-where(input) ==> [[0, 0, 0], - [0, 1, 0], - [1, 0, 1], - [1, 1, 1], - [2, 1, 1]] -``` - -##### Args: - - -* `input`: A `Tensor` of type `bool`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` of type `int64`. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.zeros.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.zeros.md deleted file mode 100644 index 57598a372d..0000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.zeros.md +++ /dev/null @@ -1,24 +0,0 @@ -### `tf.zeros(shape, dtype=tf.float32, name=None)` {#zeros} - -Creates a tensor with all elements set to zero. - -This operation returns a tensor of type `dtype` with shape `shape` and -all elements set to zero. - -For example: - -```python -tf.zeros([3, 4], int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]] -``` - -##### Args: - - -* `shape`: Either a list of integers, or a 1-D `Tensor` of type `int32`. -* `dtype`: The type of an element in the resulting `Tensor`. -* `name`: A name for the operation (optional). - -##### Returns: - - A `Tensor` with all elements set to zero. - -- cgit v1.2.3