# Embeddings

This document introduces the concept of embeddings, gives a simple example of
how to train an embedding in TensorFlow, and explains how to view embeddings
with the TensorBoard Embedding Projector
([live example](http://projector.tensorflow.org)). The first two parts target
newcomers to machine learning or TensorFlow, and the Embedding Projector how-to
is for users at all levels.

An alternative tutorial on these concepts is available in the
[Embeddings section of Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture).

[TOC]

An **embedding** is a mapping from discrete objects, such as words, to vectors
of real numbers. For example, a 300-dimensional embedding for English words
could include:

```
blue:  (0.01359, 0.00075997, 0.24608, ..., -0.2524, 1.0048, 0.06259)
blues:  (0.01396, 0.11887, -0.48963, ..., 0.033483, -0.10007, 0.1158)
orange:  (-0.24776, -0.12359, 0.20986, ..., 0.079717, 0.23865, -0.014213)
oranges:  (-0.35609, 0.21854, 0.080944, ..., -0.35413, 0.38511, -0.070976)
```

The individual dimensions in these vectors typically have no inherent meaning.
Instead, it's the overall patterns of location and distance between vectors
that machine learning takes advantage of.

Embeddings are important as inputs to machine learning. Classifiers, and neural
networks more generally, work on vectors of real numbers. They train best on
dense vectors, where all values contribute to defining an object. However, many
important inputs to machine learning, such as words of text, have no
natural vector representation. Embedding functions are the standard and
effective way to transform such discrete input objects into useful
continuous vectors.

Embeddings are also valuable as outputs of machine learning. Because embeddings
map objects to vectors, applications can use similarity in vector space (for
instance, Euclidean distance or the angle between vectors) as a robust and
flexible measure of object similarity. One common use is to find nearest
neighbors.  Using the same word embeddings as above, for instance, here are the
three nearest neighbors for each word and the corresponding angles:

```
blue:  (red, 47.6°), (yellow, 51.9°), (purple, 52.4°)
blues:  (jazz, 53.3°), (folk, 59.1°), (bluegrass, 60.6°)
orange:  (yellow, 53.5°), (colored, 58.0°), (bright, 59.9°)
oranges:  (apples, 45.3°), (lemons, 48.3°), (mangoes, 50.4°)
```

This would tell an application that apples and oranges are in some way more
similar (45.3° apart) than lemons and oranges (48.3° apart).
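
To make the angle computation concrete, here is a minimal NumPy sketch; the
4-dimensional vectors are made up for illustration and are not the real
300-dimensional embeddings quoted above:

```
import numpy as np

def angle_degrees(u, v):
    # Cosine similarity between the vectors, converted to an angle in degrees.
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical low-dimensional vectors, for illustration only.
apples = np.array([0.1, 0.9, -0.2, 0.4])
oranges = np.array([0.2, 0.8, -0.1, 0.5])
print(angle_degrees(apples, oranges))
```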

## Embeddings in TensorFlow

To create word embeddings in TensorFlow, we first split the text into words
and then assign an integer to every word in the vocabulary. Let us assume that
this has already been done, and that `word_ids` is a vector of these integers.
For example, the sentence “I have a cat.” could be split into
`["I", "have", "a", "cat", "."]`, and then the corresponding `word_ids` tensor
would have shape `[5]` and consist of 5 integers. To map these word ids
to vectors, we need to create the embedding variable and use the
`tf.nn.embedding_lookup` function as follows:

```
# Create the embedding variable and look up the vector for each word id.
word_embeddings = tf.get_variable("word_embeddings",
    [vocabulary_size, embedding_size])
embedded_word_ids = tf.nn.embedding_lookup(word_embeddings, word_ids)
```

After this, the tensor `embedded_word_ids` will have shape `[5, embedding_size]`
in our example and contain the embeddings (dense vectors) for each of the 5
words. At the end of training, `word_embeddings` will contain the embeddings
for all words in the vocabulary.
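
Putting the pieces together, here is a minimal runnable sketch of the example
above, using the TensorFlow 1.x API of this guide. The vocabulary mapping and
the embedding size are made up for illustration:

```
import tensorflow as tf

# Hypothetical vocabulary mapping words to integer ids.
vocabulary = {"I": 0, "have": 1, "a": 2, "cat": 3, ".": 4}
vocabulary_size = len(vocabulary)
embedding_size = 8  # Small, for illustration; real models often use 100-300.

words = ["I", "have", "a", "cat", "."]
word_ids = tf.constant([vocabulary[w] for w in words])  # shape [5]

word_embeddings = tf.get_variable("word_embeddings",
    [vocabulary_size, embedding_size])
embedded_word_ids = tf.nn.embedding_lookup(word_embeddings, word_ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(embedded_word_ids).shape)  # (5, 8)
```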

Embeddings can be trained in many network types, and with various loss
functions and data sets. For example, one could use a recurrent neural network
to predict the next word from the previous one given a large corpus of
sentences, or one could train two networks to do multilingual translation.
These methods are described in the [Vector Representations of Words](../tutorials/representation/word2vec.md)
tutorial.
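
As a hedged sketch of one such setup (not the tutorial's exact code): with a
skip-gram-style objective, the embedding variable can be trained using
`tf.nn.sampled_softmax_loss`. This assumes int64 batches `center_ids` and
`context_ids` drawn from a corpus, plus `word_embeddings`, `vocabulary_size`,
and `embedding_size` from the example above:

```
embed = tf.nn.embedding_lookup(word_embeddings, center_ids)  # [batch, embedding_size]

softmax_weights = tf.get_variable("softmax_weights",
    [vocabulary_size, embedding_size])
softmax_biases = tf.get_variable("softmax_biases", [vocabulary_size])

# Sampled softmax avoids computing the full softmax over the vocabulary.
loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=softmax_weights,
    biases=softmax_biases,
    labels=tf.reshape(context_ids, [-1, 1]),  # shape [batch, 1]
    inputs=embed,
    num_sampled=64,
    num_classes=vocabulary_size))
train_op = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
```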

## Visualizing Embeddings

TensorBoard includes the **Embedding Projector**, a tool that lets you
interactively visualize embeddings. This tool can read embeddings from your
model and render them in two or three dimensions.
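
For the projector to find your embeddings, you typically save a checkpoint and
a projector config to a log directory. A minimal sketch, assuming TensorFlow
1.x, a session `sess`, the `word_embeddings` variable from above, and a
`log_dir` of your choice:

```
import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

log_dir = "/tmp/embedding_logs"  # any directory TensorBoard will read

# Save the model variables, including the embedding, to a checkpoint.
saver = tf.train.Saver()
saver.save(sess, os.path.join(log_dir, "model.ckpt"))

# Point TensorBoard's projector at the embedding variable.
config = projector.ProjectorConfig()
embedding_config = config.embeddings.add()
embedding_config.tensor_name = word_embeddings.name
embedding_config.metadata_path = "metadata.tsv"  # optional; see Metadata below
projector.visualize_embeddings(tf.summary.FileWriter(log_dir), config)
```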

The Embedding Projector has three panels:

- *Data panel* on the top left, where you can choose the run, the embedding
  variable and data columns to color and label points by.
- *Projections panel* on the bottom left, where you can choose the type of
  projection.
- *Inspector panel* on the right side, where you can search for particular
  points and see a list of nearest neighbors.

### Projections

The Embedding Projector provides three ways to reduce the dimensionality of a
data set.

- *[t-SNE](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding)*:
  a nonlinear, nondeterministic algorithm (t-distributed stochastic neighbor
  embedding) that tries to preserve local neighborhoods in the data, often at
  the expense of distorting global structure. You can choose whether to compute
  two- or three-dimensional projections.

- *[PCA](https://en.wikipedia.org/wiki/Principal_component_analysis)*:
  a linear deterministic algorithm (principal component analysis) that tries to
  capture as much of the data variability in as few dimensions as possible. PCA
  tends to highlight large-scale structure in the data, but can distort local
  neighborhoods. The Embedding Projector computes the top 10 principal
  components, from which you can choose two or three to view.

- *Custom*: a linear projection onto horizontal and vertical axes that you
  specify using labels in the data. You define the horizontal axis, for
  instance, by giving text patterns for "Left" and "Right". The Embedding
  Projector finds all points whose label matches the "Left" pattern and
  computes the centroid of that set; similarly for "Right".  The line passing
  through these two centroids defines the horizontal axis. The vertical axis is
  likewise computed from the centroids for points matching the "Up" and "Down"
  text patterns.
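
The centroid computation just described is easy to reproduce outside the tool.
A minimal NumPy sketch, assuming `embeddings` is an `[n, d]` array and `labels`
a parallel list of strings (substring matching here stands in for the
Projector's actual pattern matching):

```
import numpy as np

def custom_axis(embeddings, labels, left_pattern, right_pattern):
    # Centroid of all points whose label matches each pattern.
    left = np.mean([e for e, l in zip(embeddings, labels)
                    if left_pattern in l], axis=0)
    right = np.mean([e for e, l in zip(embeddings, labels)
                     if right_pattern in l], axis=0)
    # The line through the two centroids defines the axis direction.
    return right - left
```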

Further useful articles are
[How to Use t-SNE Effectively](https://distill.pub/2016/misread-tsne/) and
[Principal Component Analysis Explained Visually](http://setosa.io/ev/principal-component-analysis/).

### Exploration

You can explore visually by zooming, rotating, and panning using natural
click-and-drag gestures. Hovering your mouse over a point will show any
[metadata](#metadata) for that point.  You can also inspect nearest-neighbor
subsets.  Clicking on a point causes the right pane to list the nearest
neighbors, along with distances to the current point. The nearest-neighbor
points are also highlighted in the projection.

It is sometimes useful to restrict the view to a subset of points and perform
projections only on those points. To do so, you can select points in multiple
ways:

- After clicking on a point, its nearest neighbors are also selected.
- After a search, the points matching the query are selected.
- After enabling selection mode, clicking on a point and dragging defines a
  selection sphere.

Then click the "Isolate *nnn* points" button at the top of the Inspector pane
on the right hand side. The following image shows 101 points selected and ready
for the user to click "Isolate 101 points":

![Selection of nearest neighbors](https://www.tensorflow.org/images/embedding-nearest-points.png "Selection of nearest neighbors")

*Selection of the nearest neighbors of “important” in a word embedding dataset.*

Advanced tip: filtering combined with a custom projection can be powerful.
Below, we filtered to the 100 nearest neighbors of “politics” and projected
them onto the “worst” - “best” vector as the x-axis. The y-axis is random. As a
result, one finds on the right side “ideas”, “science”, “perspective”, and
“journalism”, but on the left “crisis”, “violence”, and “conflict”.

<table width="100%;">
  <tr>
    <td style="width: 30%;">
      <img src="https://www.tensorflow.org/images/embedding-custom-controls.png" alt="Custom controls panel" title="Custom controls panel" />
    </td>
    <td style="width: 70%;">
      <img src="https://www.tensorflow.org/images/embedding-custom-projection.png" alt="Custom projection" title="Custom projection" />
    </td>
  </tr>
  <tr>
    <td style="width: 30%;">
      Custom projection controls.
    </td>
    <td style="width: 70%;">
      Custom projection of neighbors of "politics" onto "best" - "worst" vector.
    </td>
  </tr>
</table>

To share your findings, you can use the bookmark panel in the bottom right
corner and save the current state (including computed coordinates of any
projection) as a small file. The Projector can then be pointed to a set of one
or more of these files, producing the panel below. Other users can then walk
through a sequence of bookmarks.

<img src="https://www.tensorflow.org/images/embedding-bookmark.png" alt="Bookmark panel" style="width:300px;">

### Metadata

If you are working with an embedding, you'll probably want to attach
labels/images to the data points. You can do this by generating a metadata file
containing the labels for each point and clicking "Load data" in the data panel
of the Embedding Projector.

The metadata, either labels or images, is stored in a separate file. For
labels, the format should
be a [TSV file](https://en.wikipedia.org/wiki/Tab-separated_values)
(tab characters shown in red) whose first line contains column headers
(shown in bold) and whose subsequent lines contain the metadata values. For
example:

<code>
<b>Word<span style="color:#800;">\t</span>Frequency</b><br/>
  Airplane<span style="color:#800;">\t</span>345<br/>
  Car<span style="color:#800;">\t</span>241<br/>
  ...
</code>

The order of lines in the metadata file is assumed to match the order of
vectors in the embedding variable, except for the header.  Consequently, the
(i+1)-th line in the metadata file corresponds to the i-th row of the embedding
variable.  If the TSV metadata file has only a single column, then we don’t
expect a header row, and assume each row is the label of the embedding. We
include this exception because it matches the commonly-used "vocab file"
format.
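
Generating such a file is straightforward. A minimal sketch, assuming the
`log_dir` from earlier plus `words` and `frequencies` lists ordered like the
rows of the embedding variable:

```
import os

with open(os.path.join(log_dir, "metadata.tsv"), "w") as f:
    f.write("Word\tFrequency\n")  # header row
    for word, frequency in zip(words, frequencies):
        f.write("{}\t{}\n".format(word, frequency))
```

For the single-column "vocab file" variant mentioned above, omit the header and
write one label per line.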

To use images as metadata, you must produce a single
[sprite image](https://www.google.com/webhp#q=what+is+a+sprite+image),
consisting of small thumbnails, one for each vector in the embedding.  The
sprite should store thumbnails in row-first order: the first data point placed
in the top left and the last data point in the bottom right, though the last
row doesn't have to be filled, as shown below.

<table style="border: none;">
<tr style="background-color: transparent;">
  <td style="border: 1px solid black">0</td>
  <td style="border: 1px solid black">1</td>
  <td style="border: 1px solid black">2</td>
</tr>
<tr style="background-color: transparent;">
  <td style="border: 1px solid black">3</td>
  <td style="border: 1px solid black">4</td>
  <td style="border: 1px solid black">5</td>
</tr>
<tr style="background-color: transparent;">
  <td style="border: 1px solid black">6</td>
  <td style="border: 1px solid black">7</td>
  <td style="border: 1px solid black"></td>
</tr>
</table>
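
A sprite like this can be assembled with NumPy. A minimal sketch, assuming
`thumbnails` is an array of shape `[n, h, w]` holding one grayscale thumbnail
per embedding vector:

```
import numpy as np

def make_sprite(thumbnails):
    n, h, w = thumbnails.shape
    cols = int(np.ceil(np.sqrt(n)))          # thumbnails per row
    rows = int(np.ceil(n / float(cols)))
    sprite = np.zeros((rows * h, cols * w))  # unfilled cells stay blank
    for i, thumb in enumerate(thumbnails):
        r, c = divmod(i, cols)               # row-first order
        sprite[r * h:(r + 1) * h, c * w:(c + 1) * w] = thumb
    return sprite
```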

Follow [this link](https://www.tensorflow.org/images/embedding-mnist.mp4)
to see a fun example of thumbnail images in the Embedding Projector.


## Mini-FAQ

**Is "embedding" an action or a thing?**
Both. People talk about embedding words in a vector space (action) and about
producing word embeddings (things).  Common to both is the notion of embedding
as a mapping from discrete objects to vectors. Creating or applying that
mapping is an action, but the mapping itself is a thing.

**Are embeddings high-dimensional or low-dimensional?**
It depends. A 300-dimensional vector space of words and phrases, for instance,
is often called low-dimensional (and dense) when compared to the millions of
words and phrases it can contain. But mathematically it is high-dimensional,
displaying many properties that are dramatically different from what our human
intuition has learned about 2- and 3-dimensional spaces.

**Is an embedding the same as an embedding layer?**
No. An *embedding layer* is a part of a neural network, but an *embedding* is
a more general concept.