## On the differences between CPU and GPU or why we cannot use GPU for everything

This post is written in response to a somewhat vague Quora question asking why we cannot use GPU hardware for all computational tasks. Feel free to vote for my answer there on Quora!

CPUs and GPUs are fundamentally very different computational devices, but not many people realize it. A CPU has a few low-latency cores, elaborate flow control (prefetching, branch prediction, etc.), large caches, and a large amount of relatively inexpensive RAM. A GPU is a massively parallel device that uses expensive high-throughput memory. GPU memory is optimized for throughput, but not necessarily for latency.

Each GPU core is slow, but there can be thousands of them. When a GPU starts thousands of threads, each thread knows its “number” and uses this number to figure out which part of the “puzzle” it needs to solve (by loading and storing the corresponding areas of memory). For example, to compute a scalar product between two vectors, it is perfectly fine to start a GPU thread that multiplies just two vector elements. However, this is quite unusual from the perspective of a software developer who has been programming CPUs all their life.
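To make this concrete, here is a minimal sketch, assuming Numba's CUDA backend and an NVIDIA GPU are available (the kernel and launch configuration are illustrative, not prescriptive): each thread uses its global index to pick the single element pair it is responsible for.

```python
import numpy as np
from numba import cuda

@cuda.jit
def elementwise_product(a, b, out):
    i = cuda.grid(1)           # this thread's global "number"
    if i < out.size:           # guard threads past the end of the array
        out[i] = a[i] * b[i]   # each thread handles exactly one element pair

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
elementwise_product[blocks, threads_per_block](a, b, out)

print(out.sum())  # the scalar product is the sum of the element-wise products
```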

GPU designers make a number of trade-offs that are very different from CPU trade-offs (in terms of flow control, cache size and management, etc.), and these trade-offs are particularly well suited for parallelizable tasks. However, this does not make the GPU universally faster than the CPU. GPUs work well for massively parallel tasks such as matrix multiplication, but they can be quite inefficient for tasks where massive parallelization is impossible or difficult.

Given a large number of “data-hungry” cores, it is IMHO more important (than in the case of the CPU) to have high-bandwidth memory, while higher memory latency can be tolerated. Yet, due to the high cost of GPU memory, its amount is limited. Thus, a GPU often relies on external lower-bandwidth memory (such as CPU RAM) to fetch data. If we did not have CPU memory, loading data directly from disk (even from an SSD) would slow down many GPU workloads quite substantially. In some cases, this problem can be mitigated by connecting GPUs using a fast interconnect (NVLink, InfiniBand), but this comes at extra cost and does not resolve all the issues related to having only very limited memory.

Some answers claim that all GPU cores can only do the same thing, but this is only partially correct. Cores in the same group (warp) do operate in lock-step: to process a branch, the GPU needs to stop some of the cores in the warp and restart them when the branch finishes. Different warps, however, can operate independently (e.g., execute different CUDA kernels).
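As a hedged illustration (again a Numba sketch rather than production code), a data-dependent branch inside a kernel is still perfectly legal, but threads of the same warp that take different sides of the branch are executed one side at a time:

```python
import numpy as np
from numba import cuda

@cuda.jit
def relu_like(x, out):
    i = cuda.grid(1)
    if i < x.size:
        if x[i] > 0.0:      # threads taking this side of the branch run first...
            out[i] = x[i]
        else:               # ...while the rest of the warp idles, and then they swap
            out[i] = 0.0

x = np.random.randn(1024).astype(np.float32)
out = np.empty_like(x)
relu_like[(x.size + 255) // 256, 256](x, out)
```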

Furthermore, GPU cores are simpler than CPU cores primarily in terms of flow control. Yet, they are far from primitive and support a wide range of arithmetic operations (including fast lower-precision operations). Unlike the CPU, which manages its caches automatically, the GPU has fast shared memory, which is managed explicitly by the software developer (there is also a small L1 cache). Shared memory is essentially a manually managed cache.
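As another hedged Numba sketch (array and block sizes are arbitrary), shared memory can be used as a block-local scratchpad: each block stages a tile of the input in this fast on-chip memory and reduces it in place.

```python
import numpy as np
from numba import cuda, float32

TPB = 128  # threads per block; a compile-time constant for the shared array

@cuda.jit
def block_sums(x, partial):
    # Shared memory acts as a manually managed cache for this block's tile.
    tile = cuda.shared.array(shape=TPB, dtype=float32)
    tid = cuda.threadIdx.x
    i = cuda.grid(1)
    if i < x.size:
        tile[tid] = x[i]
    else:
        tile[tid] = 0.0
    cuda.syncthreads()
    stride = TPB // 2
    while stride > 0:                       # tree reduction within the block
        if tid < stride:
            tile[tid] += tile[tid + stride]
        cuda.syncthreads()
        stride //= 2
    if tid == 0:
        partial[cuda.blockIdx.x] = tile[0]

x = np.random.rand(1 << 20).astype(np.float32)
blocks = (x.size + TPB - 1) // TPB
partial = np.zeros(blocks, dtype=np.float32)
block_sums[blocks, TPB](x, partial)
print(partial.sum())  # should be close to x.sum()
```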

Note that not all GPUs support recursive calls (those that do seem to be quite restrictive about the recursion depth), and none of the GPUs that I know of support virtual memory. In particular, the current CUDA recursion depth seems to be 24. GPUs do not have interrupts and lack support for communicating with external IO devices. All these limitations make it difficult or impossible to use a GPU as the main processing unit that runs an operating system (see also the following paper for more details: GPUfs: the case for operating system services on GPUs. M. Silberstein, B. Ford, E. Witchel, 2014). I am convinced that future computation systems are going to be hybrid systems that combine low-latency, very generic processing units with high-throughput specialized units suitable for massively parallel tasks.

## MNIST is super easy and few people know it!

One never ceases to be surprised by the phenomenal success of the MNIST dataset, which is used in so many image publications. But do people realize how easy this dataset is? One clear measure of hardness is the performance of a simplistic k-NN classifier with the vanilla L2 metric applied directly to pixels. As a variant: the performance of the k-NN classifier after some basic unsupervised transformations such as principal component analysis (PCA) or denoising.

I created a small poll to assess what people think the k-NN classification accuracy on MNIST is, and I thank everybody who participated. Fortunately, more than one hundred people responded (most of them machine learning practitioners and enthusiasts, I assume), so I think the results are rather reliable.

In summary, nearly 40% of the respondents think that the accuracy would be at most 80%, while 45% think the accuracy is 95%. Unfortunately, I did not create an option for 90%; I think it would have received quite a few responses as well. That said, the vanilla k-NN classifier on pixels has 97% accuracy, and the combination of PCA and the k-NN classifier has nearly 98% accuracy (here is a notebook to back up the 98% claim). In fact, with a bit of additional pre-processing such as deskewing and denoising, one can get nearly 99% accuracy.
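For those who want to reproduce ballpark numbers themselves, here is a minimal scikit-learn sketch (not the linked notebook; the exact accuracies depend on k, the number of PCA components, and other details, and the run is fairly slow):

```python
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Standard MNIST split: first 60,000 images for training, last 10,000 for testing.
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]

# Vanilla k-NN with the L2 metric directly on pixels.
knn = KNeighborsClassifier(n_neighbors=3)
print('pixel k-NN:', knn.fit(X_train, y_train).score(X_test, y_test))

# Unsupervised PCA followed by the same k-NN classifier.
pca_knn = make_pipeline(PCA(n_components=40), KNeighborsClassifier(n_neighbors=3))
print('PCA + k-NN:', pca_knn.fit(X_train, y_train).score(X_test, y_test))
```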

It turns out that few people realize how effective the k-NN classifier is on MNIST: only 17% voted for 98%. That said, this does not mean that the k-NN classifier is a particularly good method overall (it can be good for tabular data, see, e.g., this paper by Shlomo Geva, but not for complex image data; check out, e.g., the numbers for CIFAR and ImageNet). It means, however, that MNIST is very easy. Understandably, people need some toy dataset to play with and quickly get results on. One better alternative is Fashion-MNIST. However, it is not too hard either: a vanilla k-NN classifier has about 85% accuracy, and it is probably possible to push the accuracy close to 90% with a bit of preprocessing. Thus, we may need a comparably small, but much more difficult dataset to replace both of them.

## Hello precision my old friend!

PREAMBLE: When dealing with retrieval, I have traditionally been using the TREC NIST evaluation tools (trec_eval and gdeval). Although these tools are old, a good amount of effort has been invested into making them right. Unfortunately, you have to call them as external tools: your program forks and runs out of memory. Although the Linux fork is lazy and does not really copy memory, this still happens. It happens even if you use the posix_spawn function and Python's spawn-type creation of new processes: multiprocessing.set_start_method('spawn').
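For context, here is the kind of external-tool invocation I mean (a hedged sketch with hypothetical file names; trec_eval is assumed to be on the PATH):

```python
import multiprocessing as mp
import subprocess

def run_trec_eval(qrels_path, run_path):
    # Launch trec_eval as an external process and capture its report.
    res = subprocess.run(['trec_eval', qrels_path, run_path],
                         capture_output=True, text=True, check=True)
    return res.stdout

if __name__ == '__main__':
    # Spawn-type process creation instead of fork, as mentioned above;
    # in my experience this still does not fully avoid the memory problem.
    mp.set_start_method('spawn')
    with mp.Pool(processes=1) as pool:
        print(pool.apply(run_trec_eval, ('qrels.txt', 'run.txt')))
```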

The issue: I decided to switch to scikit-learn or similarly-interfaced code (e.g., MatchZoo classes) to compute the IR metrics. I cross-compared results and came to the conclusion that very likely all scikit-learn-like packages are fundamentally broken when it comes to computing the mean average precision (MAP) and the normalized discounted cumulative gain (NDCG).

To compute both of the metrics, one needs two things:

1. The list of relevant documents, where the relevance label can be binary or graded.
2. The list of scored/ranked documents.

Ideally, an evaluation tool could ingest this data directly. However, sklearn and other libraries cut corners by accepting two arrays: y_score and y_true. Effectively, each returned document is paired with its relevance grade; see, e.g., scikit-learn MAP.

Unfortunately, such an evaluation ignores all relevant documents that are not returned by the system, whereas both NDCG and MAP have a normalizing factor that depends on the total number of relevant documents. For example, in my understanding, if your system finds only 10% of all relevant documents, scikit-learn would produce a MAP score roughly 10x larger than NIST trec_eval (and the Wikipedia formula). NDCG is affected by this issue as well, but to a lesser degree, because scores for omitted relevant documents would be heavily discounted anyway.
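Here is a tiny single-query illustration (separate from the notebook mentioned below, with made-up numbers): suppose ten relevant documents exist in the collection, but the system returns only one of them, at rank 1.

```python
import numpy as np
from sklearn.metrics import average_precision_score

num_relevant_total = 10           # relevant documents in the whole collection
y_true = np.array([1, 0, 0, 0])   # relevance of the four *returned* documents only
y_score = np.array([4.0, 3.0, 2.0, 1.0])

# scikit-learn normalizes by the relevant documents it can see (just one here).
print(average_precision_score(y_true, y_score))     # 1.0

# A trec_eval-style AP divides by the total number of relevant documents.
precisions_at_relevant_ranks = [1.0]                # P@1 for the single hit
print(sum(precisions_at_relevant_ranks) / num_relevant_total)  # 0.1
```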

I have created a notebook to illustrate this issue using a one-query example and the MAP metric. By the way, for some reason, scikit-learn refuses to compute NDCG on this data and fails with a weird error.

Related reading: MAP is often (but not always) a meaningless metric if you do intrinsic evaluation of the k-NN search.

## Accurate Lucene BM25: Redux

About five or six years ago, I discovered that the default Lucene BM25 similarity was giving me sub-optimal results, apparently due to a lossy encoding of document lengths (which was part of a Lucene efficiency trick). I found this when I reimplemented BM25 on my own, but without the lossy document-length encoding. On my data, the difference was about 10%, which was far from being a trifle. I ran a good number of experiments in which this difference was present; it was clearly not a random fluke or mirage. I eventually created a benchmark and published a blog post. I even made some noise on the Lucene dev list and promised to submit a patch. However, this did not happen, as I got busy and Lucene changed its internal API.

Recently, I was fortunate enough to revisit this problem thanks to Chris Kamphuis, Arjen P. de Vries, and Jimmy Lin, who took me aboard their paper "Which BM25 Do You Mean? A Large-Scale Reproducibility Study of Scoring Variants". They also did most of the work by testing several BM25 variants, of which my accurate Lucene similarity was only a small piece. Somewhat surprisingly, the "often maligned approximation of document length" in Lucene performed nearly as well as the accurate similarity. Another result is that there are only very small differences among the various BM25 implementations. I think this is an important finding, on which I reflect at the very end of the post (please read that last paragraph).

Now, there are two voices in my head: one that "maligns the approximation of the document length" and another that says this approximation is ok. How should we reconcile the two? Because the scope and size of the paper did not permit a more thorough experimentation and description, I have carried out an additional code analysis that was not included in the paper. This analysis is below.

My original experiments were run with Lucene 6 (and earlier versions). Lucene 6 does not encode a document length directly. Instead, it approximates the inverse square root of the length, and thus introduces an approximation error for basically every possible document length! Lucene 7 still supports the old scheme, but it already introduces a new encoding of the document length, which stores small numbers (less than 25) exactly and retains the four most significant binary digits for larger numbers (see my test code). This is basically a variant of a sign-free, exponent-shifted quarter-precision format (additionally, only the number of unique terms is counted, which reduces the value of the document length that needs to be encoded). I think that this new approximation scheme is much more accurate.
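To give a feel for the newer scheme, here is a hedged illustration (my own simplification, not Lucene's actual code) of an encoding that keeps values below 25 exact and otherwise retains only the four most significant binary digits:

```python
def encode_length(length: int) -> int:
    # Small lengths are stored exactly; larger ones keep only the top 4 bits.
    if length < 25:
        return length
    shift = length.bit_length() - 4
    return (length >> shift) << shift

for n in (7, 24, 25, 100, 1000, 12345):
    print(n, '->', encode_length(n))
```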

Thus, I have to disagree a bit with the somewhat optimistic conclusion of our paper that it does not matter which BM25 implementation one uses. This seems to be true only for sufficiently careful implementations of BM25, including the recent Lucene one. However, it is also clearly possible to screw up BM25 rather easily.

In conclusion, I would like to note that the results of our paper should be treated in a broader context. There is somewhat anecdotal knowledge that various papers have reported different effectiveness values for the BM25 similarity on identical collections. Some people (including me) tended to think this was due to differences in BM25 implementations. However, the paper by Trotman et al. showed that it was likely due to confounding factors such as the choice of lemmatization/stemming, tokenization, stopping, and data-cleaning algorithms: Trotman, A., Puurula, A., & Burgess, B. (2014, November). Improvements to BM25 and language models examined. Clearly, our results support the conclusions made by Trotman et al.