From 3d1f89ce1fbbe656fa7fd711d964080f88688966 Mon Sep 17 00:00:00 2001
From: Matt Dodge
Date: Fri, 27 Jul 2018 12:50:09 -0700
Subject: Use correct hash_bucket_size parameter

`s/hash_buckets_size/hash_bucket_size/` since that is the correct argument
spelling for the Python method.
---
 tensorflow/docs_src/guide/feature_columns.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tensorflow/docs_src/guide/feature_columns.md b/tensorflow/docs_src/guide/feature_columns.md
index 41080e050b..38760df82b 100644
--- a/tensorflow/docs_src/guide/feature_columns.md
+++ b/tensorflow/docs_src/guide/feature_columns.md
@@ -289,7 +289,7 @@ pseudocode:
 
 ```python
 # pseudocode
-feature_id = hash(raw_feature) % hash_buckets_size
+feature_id = hash(raw_feature) % hash_bucket_size
 ```
 
 The code to create the `feature_column` might look something like this:
@@ -298,7 +298,7 @@ The code to create the `feature_column` might look something like this:
 hashed_feature_column =
     tf.feature_column.categorical_column_with_hash_bucket(
         key = "some_feature",
-        hash_buckets_size = 100) # The number of categories
+        hash_bucket_size = 100) # The number of categories
 ```
 
 At this point, you might rightfully think: "This is crazy!" After all, we are
 forcing the different input values to a smaller set of categories. This means
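
The `feature_id = hash(raw_feature) % hash_bucket_size` pseudocode in the patched guide can be sketched as plain Python. This is a minimal illustration, not TensorFlow's implementation: TensorFlow uses its own deterministic fingerprint function, whereas Python's built-in `hash()` is salted per process, so a stable digest (MD5 here) stands in for it. The `feature_id` helper, the bucket count of 100, and the example feature strings are all illustrative assumptions.

```python
# Sketch of the hashed categorical column idea: map an arbitrary
# string feature to one of hash_bucket_size integer bucket ids.
# MD5 is used as a stand-in for a deterministic fingerprint; it is
# NOT what TensorFlow actually uses internally.
import hashlib

def feature_id(raw_feature: str, hash_bucket_size: int) -> int:
    # Stable digest of the raw feature string, reduced modulo the
    # bucket count, mirroring: hash(raw_feature) % hash_bucket_size
    digest = hashlib.md5(raw_feature.encode("utf-8")).hexdigest()
    return int(digest, 16) % hash_bucket_size

# Every feature lands in [0, hash_bucket_size); distinct features may
# collide into the same bucket, which is the trade-off the guide
# describes ("forcing the different input values to a smaller set
# of categories").
ids = [feature_id(f, 100) for f in ["kitchenware", "electronics", "sports"]]
assert all(0 <= i < 100 for i in ids)
```

The same input always maps to the same bucket id, which is why the real parameter name, `hash_bucket_size`, matters: it is the modulus that bounds the category space.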