 

My 'Aha!' moments with machine learning/data science

This post on aha moments related to statistical learning duplicates my Quora answer. Feel free to vote and comment there.

I had the same moment a couple of times. It is not really an aha moment, though, it is a duh moment. Machine learning, which is more appropriately called statistical learning, is very much a rear-view-window approach. It learns statistical patterns from data, but nothing else. Such learning creates some sort of lossy compressed representation of the "past". This compressed representation can be used to predict "the future" as long as the future exhibits the same statistical patterns as the past. As obvious as it may seem, a clear understanding of this fact helps greatly. In my opinion, this holds at least for basic supervised learning.

Another duh moment is that we have been using "machine learning" since the dawn of civilization to explain natural phenomena. Scientists observed data and came up with some sort of rules to explain why one event follows another. We clearly started with basic logical rules (e.g., one can predict that it will snow tomorrow given how the sky looks today) and progressed to more sophisticated ones that involve math.

Interestingly, human learning has essentially the same flaw as so-called machine learning: human theories can easily overfit the data. Given enough degrees of freedom, almost anything can be explained. Yet, convoluted theories are rarely true. This is probably one reason why we prefer simple, elegant ones: such a preference acts as a regularizer that prevents theories from overfitting the data.
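
To make the overfitting analogy concrete, here is a tiny sketch (my own illustration, not part of the original Quora answer): a high-degree polynomial has enough degrees of freedom to "explain" a handful of noisy points almost exactly, yet it generalizes worse than a simple linear fit.

import numpy as np

rng = np.random.RandomState(0)

# A handful of noisy observations of a simple linear "law".
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(scale=0.2, size=x.size)

# A degree-7 polynomial can pass (almost) exactly through every training point:
# a "convoluted theory" that explains everything it has seen.
overfit = np.polyfit(x, y, deg=7)

# A degree-1 fit is the "simple, elegant theory".
simple = np.polyfit(x, y, deg=1)

# On points slightly outside the observed range the convoluted theory breaks down.
x_new = np.linspace(0, 1.2, 50)
print("overfit predictions:", np.polyval(overfit, x_new)[-3:])
print("simple  predictions:", np.polyval(simple, x_new)[-3:])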

One well-known example of overfitting is the Geocentric model, which did not quite agree with observations in the first place. However, it was "fixed" by introducing a complex scheme of epicycles describing how the planets move. As a result, the theory predicted planetary motion better than the alternatives, in particular, better than the simpler Heliocentric model (which was also somewhat flawed in the beginning, because it assumed perfectly circular orbits). Many more examples (sadly) arise in a social context, when people try to explain too much while knowing too little. Most of our beliefs and conspiracy theories are probably nothing more than overfitting.



There is (almost) no difference between the cosine similarity and the Euclidean distance when it comes to normalized word embeddings (or other vectors)

One well-studied NLP task is answering analogy questions, for example: which word relates to woman in the same way that king relates to man? This is a well-researched topic: a lot of progress has been made, and human-level performance was reported at least ten years ago [1]. There are also some recent results by Mikolov and colleagues, who used neural-network-based word embeddings learned in an unsupervised fashion [2]. More specifically, the neural networks feed on huge amounts of text and spit out real-valued word vectors.

There are interesting linguistic regularities related to these vectors. For example, the vector of king minus the vector of man is approximately equal to the vector of queen minus the vector of woman. Therefore, argue Mikolov and colleagues, the word queen may be the word most similar to the following vector: king minus man plus woman. This type of regularity holds for many pairs of analogous words. Therefore, analogy questions can be answered by solving a maximization problem, which is equivalent to a kNN search [2]:

$$
\textbf{queen} = \mbox{argmax}_w\; \mbox{similarity}(w, \textbf{king}- \textbf{man} + \textbf{woman})
$$

The similarity among vectors is computed using the cosine similarity. Levy et al. (2014) introduced a multiplicative maximization objective, 3COSMUL [3], which was shown to be better than the additive, cosine-based objective. However, Pennington et al. could not reproduce this finding on their data sets [4].
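
For completeness, this is the 3COSMUL objective of Levy et al. [3] for the analogy man : king :: woman : ?, where ε is a small constant preventing division by zero and similarities are assumed to be non-negative:

$$
\textbf{queen} = \mbox{argmax}_w\; \frac{\cos(w, \textbf{king}) \cdot \cos(w, \textbf{woman})}{\cos(w, \textbf{man}) + \varepsilon}
$$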

Usually, word embeddings are normalized so that their Euclidean norm is equal to one. Then, the cosine similarity is equal to the dot product. What is, perhaps, more interesting here (and few people seem to realize this!) is that in this case the cosine similarity produces the same results as the Euclidean distance. The cosine similarity is not equal to the Euclidean distance, but it is obtained from the Euclidean distance by a monotonically decreasing transformation. For a trivial proof, please refer to Wikipedia. What does it mean? When two vectors have the largest cosine similarity (i.e., they are nearest neighbors with respect to this similarity measure), the Euclidean distance between them is the smallest. However, as noted by Hamed Zamani, there may be a difference if similarity values are used by downstream applications.
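
The trivial proof boils down to the following identity, which holds for any unit-length vectors x and y:

$$
\|x - y\|^2 = \|x\|^2 + \|y\|^2 - 2\, x \cdot y = 2 - 2\, \mbox{cos}(x, y)
$$

Hence, the Euclidean distance is a monotonically decreasing function of the cosine similarity (and vice versa), so both produce identical nearest-neighbor rankings.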

My personal pet peeve in regard to the above-mentioned linguistic regularities is as follows: for many word embeddings, the vector closest to the vector king - man + woman is not queen, but actually king! In fact, there seems to be a trend of placing words that already appear in the analogy question before the true answers! (For more examples of this phenomenon, please see slide 13 of my recent talk at the ML lunch.)

To summarize, we can indeed answer analogy questions by carrying out a kNN search in the space of word embeddings, but we have to ignore words already appearing in the question! BTW, this hack is rarely mentioned; in fact, it is omitted from the original Mikolov et al. NAACL paper [2]. It is also apparently missing from the paper on GloVe embeddings [4] (though it is indeed mentioned by Levy et al. [3]). This cost me a couple of hours of head scratching and subsequent reading of the word2vec code when I tried to obtain analogy-question answers myself.
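
Below is a minimal sketch of this procedure (my own illustration, not the actual word2vec code) for unit-normalized embeddings stored in a plain dictionary; note the line that skips the question words:

import numpy as np

def answer_analogy(a, a_star, b, embeddings):
    """Answer 'a is to a_star as b is to ?' by a brute-force kNN search.

    embeddings: dict mapping a word to its unit-normalized numpy vector.
    """
    target = embeddings[a_star] - embeddings[a] + embeddings[b]
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, a_star, b):
            continue  # the crucial hack: ignore words from the question
        sim = np.dot(vec, target)  # equals cosine similarity up to normalization of target
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# E.g., answer_analogy('man', 'king', 'woman', embeddings) is expected
# to return 'queen' for reasonably good embeddings.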

  1. P. Turney. Human-level performance on word analogy questions by latent relational analysis. 2004.
  2. T. Mikolov, W.-t. Yih, and G. Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, 2013.
  3. O. Levy and Y. Goldberg. Linguistic regularities in sparse and explicit word representations. In CoNLL, 2014.
  4. J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.


Using Nearest-Neighbor Search in Machine Learning and Natural Language Processing

We have been working on improving our Non-Metric Space Library (NMSLIB), a toolkit for searching in generic spaces. First, we carried out a new evaluation on a reasonably diverse collection of data sets. The results recently appeared in PVLDB 15.

Second, we participated in a public evaluation. The results confirmed that our implementations are quite competitive. More specifically, the small-world graph approach proposed by Malkov et al. fared very well against FLANN and kgraph.

However, our work is far from finished. We now aim to apply our toolkit to NLP problems. I summarized my thoughts on this topic in a talk at the ML lunch.
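
As a quick illustration of what the toolkit does, here is a minimal sketch based on NMSLIB's current Python bindings (which may differ from the library version discussed in this post): index a set of dense vectors and run approximate k-NN queries.

import numpy as np
import nmslib

data = np.random.randn(1000, 100).astype(np.float32)  # toy dense vectors

# The small-world graph method of Malkov et al. over the cosine-similarity space.
index = nmslib.init(method='sw-graph', space='cosinesimil')
index.addDataPointBatch(data)
index.createIndex(print_progress=True)

query = np.random.randn(100).astype(np.float32)
ids, dists = index.knnQuery(query, k=10)  # ids of the 10 approximate nearest neighbors
print(ids, dists)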

Equally importantly, we are trying to make our toolkit easier to use. Because we originally cared mostly about efficiency of experimentation and publishing, a few important features are missing. We are now trying to fill this gap.

In addition, we plan to carry out more comprehensive evaluations that would allow us to better understand the problem at hand, as well as to devise methods that work well for a broader class of non-metric spaces.



There is no love between academia and industry!

Before the keynote talk in Hawaii, the chair, Volker Markl, greeted the speaker Juan Loaiza (Senior Vice President of Systems Technology at Oracle) with a Hawaiian lei. A lei is traditionally accompanied by a kiss. The day before, the Turing award winner Mike Stonebraker had been greeted in exactly this way. So, Juan Loaiza jokingly reminded Volker that he was supposed to kiss him. After Volker ignored the comment, Juan summarized: there is no love between academia and industry.



Anti-advice: Don't tune your BM25 coefficients!

A short anti-advice on tuning: please, never, ever tune the BM25 coefficients, or the coefficients of any other retrieval model. Just stick to the values that worked well for "other" data sets. Otherwise, your numbers may improve by as much as 5%.
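
For reference (my addition), the coefficients in question are k_1 and b in the standard BM25 scoring function:

$$
\mbox{BM25}(D, Q) = \sum_{q \in Q} \mbox{IDF}(q) \cdot \frac{f(q, D)\,(k_1 + 1)}{f(q, D) + k_1 \left(1 - b + b \, \frac{|D|}{\mbox{avgdl}}\right)}
$$

Here f(q, D) is the frequency of the query term q in document D, |D| is the document length, and avgdl is the average document length in the collection. Commonly used default values are roughly k_1 ≈ 1.2 and b ≈ 0.75, which is exactly why leaving them untouched on a new collection can cost a few percent of effectiveness.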


