Diffstat (limited to 'tensorflow/contrib/slim/README.md')
-rw-r--r--  tensorflow/contrib/slim/README.md  |  41
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/tensorflow/contrib/slim/README.md b/tensorflow/contrib/slim/README.md
index 7c89dab7d1..1c192076ce 100644
--- a/tensorflow/contrib/slim/README.md
+++ b/tensorflow/contrib/slim/README.md
@@ -99,7 +99,7 @@ normal distribution, regularize it with an `l2_loss` and place it on the `CPU`,
one need only declare the following:
```python
-weights = variables.variable('weights',
+weights = slim.variable('weights',
shape=[10, 10, 3 , 3],
initializer=tf.truncated_normal_initializer(stddev=0.1),
regularizer=slim.l2_regularizer(0.05),
@@ -361,11 +361,11 @@ One can also nest `arg_scopes` and use multiple operations in the same scope.
For example:
```python
- with slim.arg_scope([slim.conv2d, slim.fully_connected],
+with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
weights_regularizer=slim.l2_regularizer(0.0005)):
- with arg_scope([slim.conv2d], stride=1, padding='SAME'):
+ with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
net = slim.conv2d(net, 256, [5, 5],
weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
@@ -450,7 +450,7 @@ images, labels = ...
predictions = vgg.vgg16(images)
# Define the loss functions and get the total loss.
-loss = losses.softmax_cross_entropy(predictions, labels)
+loss = slim.losses.softmax_cross_entropy(predictions, labels)
```
In this example, we start by creating the model (using TF-Slim's VGG
@@ -477,7 +477,7 @@ total_loss = slim.losses.get_total_loss(add_regularization_losses=False)
In this example, we have two losses which we add by calling
`slim.losses.softmax_cross_entropy` and `slim.losses.sum_of_squares`. We can
obtain the total loss by adding them together (`total_loss`) or by calling
-`slim.losses.GetTotalLoss()`. How did this work?
+`slim.losses.get_total_loss()`. How did this work?
When you create a loss function via TF-Slim, TF-Slim adds the loss to a
special TensorFlow collection of loss functions. This enables you to either
manage the total loss manually, or allow TF-Slim to manage them for you.
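Because every loss created through `slim.losses` is registered in that collection, the manual route is straightforward as well. The snippet below is a minimal sketch of what that looks like; it assumes only the `slim.losses.get_losses` and `slim.losses.get_regularization_losses` helpers and the losses defined above:
```python
# Retrieve the losses that TF-Slim has registered so far.
slim_losses = slim.losses.get_losses()
regularization_losses = slim.losses.get_regularization_losses()

# Summing them by hand is roughly what slim.losses.get_total_loss() does
# when add_regularization_losses=True.
total_loss = tf.add_n(slim_losses + regularization_losses)
```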
@@ -566,11 +566,10 @@ vgg = tf.contrib.slim.nets.vgg
...
train_log_dir = ...
-if not gfile.Exists(train_log_dir):
- gfile.MakeDirs(train_log_dir)
+if not tf.gfile.Exists(train_log_dir):
+ tf.gfile.MakeDirs(train_log_dir)
-g = tf.Graph()
-with g.as_default():
+with tf.Graph().as_default():
# Set up the data loading:
images, labels = ...
@@ -581,7 +580,7 @@ with g.as_default():
slim.losses.softmax_cross_entropy(predictions, labels)
total_loss = slim.losses.get_total_loss()
- tf.summary.scalar('losses/total loss', total_loss)
+ tf.summary.scalar('losses/total_loss', total_loss)
# Specify the optimization scheme:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
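# A sketch of the steps that typically follow: wrap the loss and optimizer
# into a train op and hand it to TF-Slim's training loop. The step count
# below is illustrative, not prescribed by this example.
train_op = slim.learning.create_train_op(total_loss, optimizer)
slim.learning.train(train_op, train_log_dir, number_of_steps=1000)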
@@ -638,8 +637,8 @@ helper functions to select a subset of variables to restore:
```python
# Create some variables.
-v1 = slim.variables.variable(name="v1", ...)
-v2 = slim.variables.variable(name="nested/v2", ...)
+v1 = slim.variable(name="v1", ...)
+v2 = slim.variable(name="nested/v2", ...)
...
# Get list of variables to restore (which contains only 'v2'). These are all
@@ -748,7 +747,7 @@ We define a metric to be a performance measure that is not a loss function
(losses are directly optimized during training), but which we are still
interested in for the purpose of evaluating our model.
For example, we might want to minimize log loss, but our metrics of interest
-might be F1 score, or Intersection Over Union score (which are not
+might be F1 score (test accuracy), or Intersection Over Union score (which are not
differentiable, and therefore cannot be used as losses).
TF-Slim provides a set of metric operations that makes evaluating models
@@ -775,8 +774,8 @@ set (upon which the loss is computed), we'll assume we're using test data:
images, labels = LoadTestData(...)
predictions = MyModel(images)
-mae_value_op, mae_update_op = slim.metrics.mean_absolute_error(predictions, labels)
-mre_value_op, mre_update_op = slim.metrics.mean_relative_error(predictions, labels, labels)
+mae_value_op, mae_update_op = slim.metrics.streaming_mean_absolute_error(predictions, labels)
+mre_value_op, mre_update_op = slim.metrics.streaming_mean_relative_error(predictions, labels, labels)
pl_value_op, pl_update_op = slim.metrics.percentage_less(mean_relative_errors, 0.3)
```
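Each of these calls returns two ops: an *update* op that folds the statistics of the current batch into internal (local) variables, and a *value* op that reports the metric accumulated so far. A minimal sketch of driving them by hand, assuming an illustrative `num_batches`, looks like this:
```python
with tf.Session() as sess:
  # Streaming metrics keep their running totals in local variables.
  sess.run(tf.local_variables_initializer())
  for _ in range(num_batches):
    # Each run of the update ops accumulates one batch of statistics.
    sess.run([mae_update_op, mre_update_op, pl_update_op])
  # The value ops report the metrics over everything accumulated so far.
  mae, mre, pl = sess.run([mae_value_op, mre_value_op, pl_value_op])
```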
@@ -793,13 +792,13 @@ this, TF-Slim provides two convenience functions:
# Aggregates the value and update ops in two lists:
value_ops, update_ops = slim.metrics.aggregate_metrics(
- slim.metrics.mean_absolute_error(predictions, labels),
- slim.metrics.mean_squared_error(predictions, labels))
+ slim.metrics.streaming_mean_absolute_error(predictions, labels),
+ slim.metrics.streaming_mean_squared_error(predictions, labels))
# Aggregates the value and update ops in two dictionaries:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
- "eval/mean_absolute_error": slim.metrics.mean_absolute_error(predictions, labels),
- "eval/mean_squared_error": slim.metrics.mean_squared_error(predictions, labels),
+ "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
+ "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})
```
@@ -823,8 +822,8 @@ predictions = vgg.vgg_16(images)
# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
- "eval/mean_absolute_error": slim.metrics.mean_absolute_error(predictions, labels),
- "eval/mean_squared_error": slim.metrics.mean_squared_error(predictions, labels),
+ "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
+ "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})
# Evaluate the model using 1000 batches of data:
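# A sketch of one way to drive this evaluation: slim.evaluation.evaluation_loop
# repeatedly restores the latest checkpoint and runs the update ops. The
# directory names and interval below are illustrative.
num_batches = 1000
slim.evaluation.evaluation_loop(
    '',                                       # master
    checkpoint_dir,                           # where training writes checkpoints
    log_dir,                                  # where evaluation summaries go
    num_evals=num_batches,
    eval_op=list(names_to_updates.values()),
    eval_interval_secs=600)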