Word Embeddings

A key idea in the analysis of text is representing words as numeric quantities. There are a number of ways to go about this, and we’ve actually already done so. In the sentiment analysis section, words were given a sentiment score. In topic modeling, words were represented as frequencies across documents. Once we have a numeric representation, we can run statistical models.

Consider topic modeling again. We take the document-term matrix and reduce its dimensionality to just a few topics. Now consider a co-occurrence matrix: if there are \(k\) words, it is a \(k \times k\) matrix, where cell \(ij\) tells us how frequently word \(i\) occurs with word \(j\). Just as in topic modeling, we could now apply some matrix factorization technique to reduce the dimensionality of this matrix [1]. Now for each word we have a vector of numeric values (across factors) to represent it. Indeed, this is how some earlier approaches worked, for example, using principal components analysis on the co-occurrence matrix.

Newer techniques such as word2vec and GloVe use neural net approaches to construct word vectors. Applied users do not need to know the details to benefit from them. Furthermore, these approaches have been extended to create sentence and other vector representations [2]. In any case, with vector representations of words we can see how similar they are to each other, and perform other tasks based on that information.

A tired example from the literature is as follows: \[\mathrm{king - man + woman = queen}\]

So a woman-king is a queen.

Here is another example:

\[\mathrm{Paris - France + Germany = Berlin}\]

Berlin is the Paris of Germany.

The idea is that with vectors created just based on co-occurrence we can recover things like analogies. Subtracting the man vector from the king vector and adding woman, the most similar word to this would be queen. For more on why this works, take a look here.
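To make the arithmetic concrete, here is a minimal sketch with made-up three-dimensional vectors; the values are purely illustrative (real embeddings have dozens to hundreds of dimensions, learned from data), but the mechanics of vector arithmetic plus cosine similarity are the same.

```r
# Toy 'word vectors'; the values are invented purely for illustration.
vecs = rbind(
  king  = c(1.0, 0.9, 0.1),
  man   = c(0.8, 0.1, 0.0),
  woman = c(0.7, 0.1, 0.9),
  queen = c(0.9, 0.9, 1.0)
)

# king - man + woman
target = vecs["king", ] - vecs["man", ] + vecs["woman", ]

# cosine similarity of the result with every word in our tiny vocabulary
cos_sim = apply(vecs, 1, function(v)
  sum(v * target) / sqrt(sum(v^2) * sum(target^2))
)

sort(cos_sim, decreasing = TRUE)  # queen comes out on top
```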

Shakespeare example

We start with some already tokenized data from the works of Shakespeare. We’ll treat the words as if they just come from one big Shakespeare document, and only consider the words as tokens, as opposed to using n-grams. We create an iterator object for text2vec functions to use, and with that in hand, create the vocabulary, keeping only those that occur at least 5 times. This example generally follows that of the package vignette, which you’ll definitely want to spend some time with.
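A minimal sketch of this step with text2vec, assuming the tokenized words live in a character vector called shakes_words (that object name is assumed here, not part of the package):

```r
library(text2vec)

# Treat all tokens as one big document, as described above.
shakes_list = list(shakes_words)

# Create the iterator that the text2vec functions work with.
it = itoken(shakes_list, progressbar = FALSE)

# Build the vocabulary, then drop terms seen fewer than 5 times.
shakes_vocab = create_vocabulary(it)
shakes_vocab = prune_vocabulary(shakes_vocab, term_count_min = 5)

shakes_vocab
```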


Let’s take a look at what we have at this point. We’ve just created word counts; that’s all the vocabulary object is.

Number of docs: 1 
0 stopwords:  ... 
ngram_min = 1; ngram_max = 1 
Vocabulary: 
            term term_count doc_count
   1:   bounties          5         1
   2:        rag          5         1
   3: merchant's          5         1
   4: ungovern'd          5         1
   5:   cozening          5         1
  ---                                
9090:         of      17784         1
9091:         to      20693         1
9092:          i      21097         1
9093:        and      26032         1
9094:        the      28831         1

The next step is to create the token co-occurrence matrix (TCM). The definition of whether two words occur together is somewhat arbitrary. Should we just look at the previous and next word? Five words behind and five ahead? This will definitely affect the results, so you will want to play around with it.

Note that such a matrix will be extremely sparse. Most words do not go with other words in the grand scheme of things. So when they do, it usually matters.
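A sketch of this step, reusing the iterator and vocabulary from above and taking a window of five words on either side (just one reasonable choice):

```r
# Map vocabulary terms to matrix indices.
vectorizer = vocab_vectorizer(shakes_vocab)

# Token co-occurrence matrix: counts of words appearing within
# 5 words of each other (stored as a sparse matrix).
shakes_tcm = create_tcm(it, vectorizer, skip_grams_window = 5)
```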

Now we are ready to create the word vectors based on the GloVe model. Various options exist, so you’ll want to dive into the associated help files and perhaps the original articles to see how you might play around with it. The following takes roughly a minute or two on my machine. I suggest you start with n_iter = 10 and/or convergence_tol = 0.001 to gauge how long you might have to wait.

In this setting, we can think of our word of interest as the target, and any/all other words (within the window) as the context. Word vectors are learned for both.
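A sketch of the fitting step follows; depending on your text2vec version the size argument may be called rank (newer) or word_vectors_size (older), and the settings shown are just a starting point.

```r
# Fit the GloVe model on the co-occurrence matrix.
glove = GlobalVectors$new(rank = 50, x_max = 10)

# Target (main) word vectors; fewer iterations or a looser
# convergence tolerance will finish faster.
shakes_wv_main = glove$fit_transform(shakes_tcm, n_iter = 10, convergence_tol = 0.001)

# Context word vectors live in the fitted model object;
# summing the two sets is a common choice.
shakes_wv_context = glove$components
shakes_word_vectors = shakes_wv_main + t(shakes_wv_context)
```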

Now we can start to play. The measure of interest in comparing two vectors will be cosine similarity, which, if you’re not familiar with it, you can think of as roughly analogous to the standard correlation [3]. Let’s see what is similar to Romeo.
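One way to do this is with text2vec’s sim2 function, continuing with the objects from the sketches above:

```r
# Cosine similarity of every word vector with the 'romeo' vector.
rom = shakes_word_vectors["romeo", , drop = FALSE]
rom_sim = sim2(shakes_word_vectors, rom, method = "cosine", norm = "l2")

head(sort(rom_sim[, 1], decreasing = TRUE), 10)
```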

romeo  juliet  tybalt  benvolio  nurse  iago  friar  mercutio  aaron  roderigo
 1.00    0.78    0.72      0.65   0.64  0.63   0.61      0.60   0.60      0.59

Obviously Romeo is most like Romeo, but after that comes much of the rest of the cast. As this text is somewhat raw, this likely reflects character names being attached to their lines in the plays. As such, one may want to narrow the window [4]. Let’s try love.

x
love 1.00
that 0.80
did 0.72
not 0.72
in 0.72
her 0.72
but 0.71
so 0.71
know 0.71
do 0.70

The issue here is that love is used so commonly in Shakespeare that it’s mostly similar to other very common words. What if we take Romeo, subtract his friend Mercutio, and add Nurse? This is similar to the analogy example we had at the start.
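A sketch of that arithmetic, again assuming the objects from the earlier sketches:

```r
# romeo - mercutio + nurse
test_vec = shakes_word_vectors["romeo", , drop = FALSE] -
  shakes_word_vectors["mercutio", , drop = FALSE] +
  shakes_word_vectors["nurse", , drop = FALSE]

# Most similar words to the resulting vector.
test_sim = sim2(shakes_word_vectors, test_vec, method = "cosine", norm = "l2")
head(sort(test_sim[, 1], decreasing = TRUE), 3)
```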

x
nurse 0.87
juliet 0.72
romeo 0.70

It looks like we get Juliet as the most likely word (after the ones we actually used), just as we might have expected. Again, we can think of this as Romeo is to Mercutio as Juliet is to the Nurse. Let’s try another like that.

x
cleopatra 0.81
romeo 0.70
antony 0.70

One can play with stuff like this all day. For example, you may find that a Romeo without love is a Tybalt!

Wikipedia

The following shows the code for analyzing text from Wikipedia, and comes directly from the text2vec vignette. Note that this is a relatively large amount of text (100MB), and so will take notably longer to process.
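The vignette works with the text8 sample of Wikipedia; a sketch along its lines follows, though you should consult the current vignette for the exact code, as the API has shifted a bit across versions.

```r
library(text2vec)

# Download the ~100MB text8 Wikipedia sample if we don't already have it.
text8_file = "~/text8"
if (!file.exists(text8_file)) {
  download.file("http://mattmahoney.net/dc/text8.zip", "~/text8.zip")
  unzip("~/text8.zip", files = "text8", exdir = "~/")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)

# Same pipeline as before: tokenize, build and prune the vocabulary, create the TCM.
tokens = space_tokenizer(wiki)
it = itoken(tokens, progressbar = FALSE)
vocab = prune_vocabulary(create_vocabulary(it), term_count_min = 5)
vectorizer = vocab_vectorizer(vocab)
tcm = create_tcm(it, vectorizer, skip_grams_window = 5)

# Fit GloVe and combine target and context vectors.
glove = GlobalVectors$new(rank = 50, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 10, convergence_tol = 0.01)
word_vectors = wv_main + t(glove$components)
```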

Let’s try our Berlin example.
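Following the vignette, the analogy arithmetic looks like this (using the word_vectors object from the sketch above):

```r
# paris - france + germany
berlin = word_vectors["paris", , drop = FALSE] -
  word_vectors["france", , drop = FALSE] +
  word_vectors["germany", , drop = FALSE]

berlin_sim = sim2(word_vectors, berlin, method = "cosine", norm = "l2")
head(sort(berlin_sim[, 1], decreasing = TRUE), 5)
```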

    paris    berlin    munich   germany        at 
0.7575511 0.7560328 0.6721202 0.6559778 0.6519383 

Success! Now let’s try the queen example.

     king       son alexander     henry     queen 
0.8831932 0.7575572 0.7042561 0.6769456 0.6755054 

Not so much, though queen does still show up among the top results. Results are of course highly dependent upon the data and settings you choose, so keep the context in mind when trying this out.

Now that words are vectors, we can use them in any model we want, for example, to predict sentiment. Furthermore, extensions have been made to deal with sentences, paragraphs, and even lda2vec! In any event, hopefully you now have some idea of what word embeddings are and what they can do for you, and have added another tool to your text analysis toolbox.


  1. You can imagine how difficult this would be for the full English language, which might be something on the order of 1 million words (i.e., a co-occurrence matrix with a million rows and columns).

  2. Simply taking the average of the word vector representations within a sentence to represent the sentence as a vector is surprisingly performant.

  3. It’s also used in the Shakespeare Start to Finish section.

  4. With a window of 5, Romeo’s top 10 includes others like Troilus and Cressida.