 

Early life of dynamic programming (Concluding part)

Eight years ago I started my blog with a post on the origins of dynamic programming. Therein, I argue that the term programming stems from a military definition of the word "program", which simply means planning and logistics. In mathematics, this term was adopted to denote optimization problems and gave rise to several names such as integer, convex, non-linear, and differentiable programming. I promised to describe how dynamic programming had a somewhat rocky start in computational biology in a follow-up post, but never delivered on this promise.

It has been a decade of phenomenal success for another programming concept, namely differentiable programming. Three neural network pioneers, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun (with a regretful omission of Jürgen Schmidhuber), received the Turing award "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing." Now seems to be the perfect time to deliver on the promise and wrap up the historical dynamic programming posts.

As I mentioned in my first blog post, dynamic programming is a relatively simple way to solve complex problems through an elegant and efficient recursion. In particular, it is at the heart of evolutionary distances in computational biology and of the Levenshtein (also known as the edit) distance in natural language processing. Different as they are, these fields rely on string comparison via variants of the edit distance. The simplest way to compute an unweighted edit distance between strings $a=a_1 a_2 \ldots a_n$ and $b = b_1 b_2 \ldots b_m$ is through the following simple recursion:

$$
d_{i+1,j+1} = \min \left\{
\begin{array}{c}
d_{i,j+1} + 1 \\
d_{i+1,j} + 1 \\
d_{i,j} + (a_{i+1} \ne b_{j+1})\\
\end{array}
\right.
$$

The computational cost is quadratic. Although there are faster average-case algorithms, there is little hope that the worst-case time is strongly sub-quadratic.
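
For concreteness, here is a minimal Python sketch of this recursion (my addition, not from the original post). It fills an $(n+1) \times (m+1)$ table of prefix distances, with the standard boundary values $d_{i,0}=i$ and $d_{0,j}=j$ corresponding to deleting or inserting an entire prefix:

```python
def edit_distance(a: str, b: str) -> int:
    """Unweighted edit distance via the quadratic dynamic programming recursion."""
    n, m = len(a), len(b)
    # d[i][j] is the edit distance between the prefixes a[:i] and b[:j].
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                        # delete all i characters of a
    for j in range(m + 1):
        d[0][j] = j                        # insert all j characters of b
    for i in range(n):
        for j in range(m):
            d[i + 1][j + 1] = min(
                d[i][j + 1] + 1,           # deletion
                d[i + 1][j] + 1,           # insertion
                d[i][j] + (a[i] != b[j]),  # match or substitution
            )
    return d[n][m]

assert edit_distance("kitten", "sitting") == 3
```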

The formula is simple, but not always immediately obvious. In fact, it took the scientific community almost ten years to fully realize that essentially the same approach provides a solution for several fields: information retrieval, bioinformatics, and speech recognition (see my survey for a list of early works). Furthermore, as I explain below, the renowned mathematician Stanislaw Ulam not only failed to discover the formula, but also failed to recognize the solution when it was presented to him by David Sankoff. So, the next time you fail to solve an apparently "simple" dynamic programming puzzle, do not beat yourself up!

The edit distance was first published by Levenshtein (and is often called the Levenshtein distance) in the context of error-correcting binary codes (Levenshtein, Vladimir I. "Binary codes capable of correcting deletions, insertions, and reversals." Soviet Physics Doklady, Vol. 10, No. 8, 1966). Imagine a noisy channel that occasionally modifies binary messages by changing, inserting, and deleting bits. If such spurious changes are not very frequent, it may still be possible to recover the original message. This is possible, however, only if the code of one symbol is sufficiently different from the codes of the other symbols (i.e., when the distance between them is large enough).

Consider an example where the set of codes contains only two 3-bit code words, 000 and 111, and the channel cannot modify more than one bit. Then, a noisy version of 000 will always be different from a noisy version of 111. Indeed, the noisy version of 000 can have at most one bit equal to one, while the noisy version of 111 always has at least two bits equal to one. By counting the number of unit bits, we can always recover the original code word.
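
As a tiny illustration (my addition), the recovery rule from this example can be written in a few lines of Python:

```python
def decode(received: str) -> str:
    """Majority-decode a noisy 3-bit word that started out as '000' or '111'."""
    # With at most one flipped bit, counting the ones identifies the original word.
    return '111' if received.count('1') >= 2 else '000'

assert decode('010') == '000'   # one bit of 000 was flipped
assert decode('101') == '111'   # one bit of 111 was flipped
```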

Clearly, there is a lot of waste here, because we use only two values out of eight possible ones. However, this is the price we pay to keep the noisy variants of the codes separable. Levenshtein himself was only interested in estimating the amount of waste necessary to make the garbled codes separable. He actually proved that the waste would not be that bad: it is possible to use about $2^n/n$ separable code words of length $n$ bits.

Although Levenshtein's paper is massively cited, I think few people have actually read it, in particular because it was not possible to access the paper online. For my survey on approximate dictionary searching, I had to visit an actual Russian library to read it, which was a bit complicated, because at that time I already resided in the US.

Levenshtein apparently did not realize that the distance he introduced could be useful in text-processing applications, such as spell-checking, speech recognition, and alignment of DNA (or protein) sequences. In the 1960s, bioinformatics was in its early stages. The structure of DNA was discovered in 1953, but the first complete gene was not decoded until 1972! Nevertheless, there was apparently a lot of interest in the problem of sequencing and finding similar regions among sequences of different species (note, however, that this is not the only reason to look for similar sequences).

The usefulness of the latter approach rests on the assumption that similarities in genetic sequences represent common ancestry. Two species start from a common genome and then follow different evolutionary paths, which results in changes to certain areas of the original DNA. Yet, most subsequences of the DNA remain very similar. In a simplified model, Nature "edits" a sequence by randomly deleting, inserting, or modifying certain small areas of the genes. These changes happen at a certain rate: the longer the time frame, the more changes accumulate. By measuring the amount of difference between two species (using some form of the edit distance), we can estimate the time required to evolve from a single common ancestor. (See, e.g., Nei, M., & Zhang, J. (2006). Evolutionary Distance: Estimation, for more details.)

Stanislaw Ulam, a famous mathematician who played one of the pivotal roles in the Manhattan project, is often credited with the invention of the evolutionary distance. However, as argued by David Sankoff, he failed to realize that the distance can be computed by a simple dynamic programming algorithm. It turns out that dynamic programming was not so simple after all.



Adversarial AI vs Evil AI in layman's terms

This is written in response to a Quora question asking to explain in layman's terms the difference between adversarial and evil AI. Feel free to vote on my answer on Quora.

This is an excellent question! For starters, in my opinion, current AI heavily relies on statistical learning methods, which are rather basic. For this reason, it is nowhere near producing sufficiently intelligent machines, let alone machines that can have feelings, emotions, free will, etc. There are algorithmic and hardware limitations, which I cover in my blog post (also available as a Quora answer).

Modern AI cannot be evil in the traditional, human sense of the word; however, it can cause a lot of harm, as can any other immature technology. For example, despite the famous claim by the Turing award winner G. Hinton that we would have to stop training radiologists roughly today, there is mounting evidence that deep learning methods for image analysis do not always work well.

Furthermore, statistical methods (aka AI) are becoming ubiquitous tools of decision making (money lending, job searching, and even jailing people). However, statistical learning methods are not inherently fair and can be biased against certain groups of people. From this perspective, AI can be considered evil. Of course, humans are biased too, but human opinions are diverse and we, humans, tend to improve. Having a single black-box, uncontrollable decision algorithm that becomes more and more biased is a scary prospect.

Modern AI is unreliable and immature: it works only in very constrained environments. Why is that? Because statistical learning is a rear-view-mirror approach that makes future decisions based on patterns observed in the past (aka training data). Once the actual (test) data diverges from the training data in terms of statistical properties, the performance of modern AI drops quite sharply.

In fact, it is often possible to tweak the data only slightly and still sharply decrease the performance of an AI system. This is called an adversarial attack. For example, there is research showing that the addition of distractor phrases does not confuse humans much, but completely "destroys" the performance of a natural language understanding system. For reference, the modern history of adversarial examples started with the famous paper by Szegedy et al. (2013). They showed that small image perturbations, which are too small to be noticed by humans, completely confuse deep neural networks.
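
As an illustration (my addition), here is a minimal sketch of such an image perturbation, assuming a PyTorch classifier `model` and a correctly labeled batch `(x, y)`; it follows the well-known fast gradient sign method rather than the exact procedure of Szegedy et al., and the step size `epsilon` is a hypothetical choice:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Shift each pixel a little along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The change is imperceptible to a human, yet it often flips the model's prediction.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```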

In summary, adversarial AI has nothing to do with evil AI. It is concerned primarily with devising methods to fool modern statistical learning methods with (adversarial) examples, as well as with methods to defend against such attacks. Clearly, we want models that can withstand adversarial attacks. This is a difficult objective, and a lot of researchers specialize in the so-called adversarial AI.



On the worthiness of PhD

This is written in response to a Quora question, which asks whether a PhD was worth it. Feel free to vote there for my answer on Quora!

A bit of background: after about 10 years of working on various database, infrastructure, and web projects, I decided to get more research experience. You can consider it a mid-career change, with the caveat that I did not completely change fields (software development and computer science), but rather transitioned to work on more exciting problems.

I do not think it was worthwhile financially, but I have never thought it would be. All in all, I think I might eventually break even. Was it worthwhile otherwise? Have I achieved my goals?

First of all, I started working on problems which I previously had little chance to work on. Before joining the program, I worked a bit on information retrieval (IR) applications and infrastructure. If you are a generic web/database developer, you will likely be pigeonholed into one of these positions for the rest of your life. Now, I have projects in speech recognition, NLP, and IR. In particular, I believe this work can improve doctors' lives and prevent their burnout.

Second, I believe my PhD studies were the beginning of a mind-expansion journey that I do not intend to finish (till death do us part).

Third, because I worked hard to get a degree from a recognized institution, I get quite a bit of attention from recruiters.

Have I achieved all my goals? The answer is no and it is still a work in progress. I have become an applied scientist, but I am still quite interested in working on more fundamental problems.

I am generally satisfied with my PhD studies. I believe it did open some new doors. However, consider the following:

  1. I am in the booming field of computer science. Furthermore, I am in the booming sub-field of speech and language processing.
  2. Although I have not become famous, two of the research libraries I co-authored have become reasonably well-known and my papers get some citations.
  3. I have obtained my PhD from a recognized institution a bit faster than my department average (by design a US PhD is supposed to take about six years). Doing so was not a walk in the park! I know people who got stuck for 10 years.
  4. The fact that my original background was applied math and software engineering (combined with substantial real-world experience) was certainly quite helpful in achieving this goal.
  5. That said, I am still not quite sure what I am going to do in the near future.

Given the roughly 50% failure rate on the way to a PhD degree, the potential sleep deprivation, burnout, loss of interest in research, and other possibly unlucky circumstances, I can easily imagine how getting a PhD can be an extremely frustrating experience in terms of morale, finances, and health.



Robert Mercer's contribution to the development of machine translation technologies

This is written in response to a Quora question, which asks about Robert Mercer's contribution to the development of machine translation technologies. Feel free to vote there for my answer on Quora!

Robert Mercer (together with Peter Brown and a few other folks) played a pivotal role in the creation of the first modern translation models. They were able to build the first modern large-scale noisy-channel translation system and publish the first paper on the subject. They created the well-known series of IBM translation models and spearheaded a new research direction (which is huge nowadays).

Recently, Robert received the ACL Lifetime Achievement Award for his pioneering work on machine translation. He was interviewed on the topic, and there is a nice transcript of the story that uncovers a lot of historical details: Twenty Years of Bitext.



How do we make the architecture more efficient for machine learning systems, such as TensorFlow, without just adding more CPUs, GPUs, or ASICs?

This is written in response to a Quora question, which asks about improving the efficiency of machine learning models without increasing hardware capacity. Feel free to vote there for my answer on Quora!

Efficiency in machine learning in general, and in deep learning in particular, is a huge topic. Depending on what the goal is, different tricks can be applied.

  1. If the model is too large, or you have an ensemble, you can train a much smaller student model that mimics the behavior of the large model. You can train the student to directly predict the teacher's probability distribution (for classification). The classic paper is "Distilling the Knowledge in a Neural Network" by Hinton et al., 2015 (a minimal sketch of such a distillation loss is given after this list).

  2. Use a simpler and/or smaller model that parallelizes well. For example, one reason transformer neural models are effective is that they are easier/faster to train compared to LSTMs.

  3. If the model does not fit into memory, you can train it using mixed precision: "Mixed precision training" by Narang et al 2018.

  4. Another trick, which comes at the expense of run time, consists in discarding some of the tensors during training and recomputing them when necessary: "Low-Memory Neural Network Training: A Technical Report" by Sohoni et al., 2019. There is a Google library for this: "Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models."

  5. There is a ton of work on quantization (see, e.g., "Fixed Point Quantization of Deep Convolutional Networks" by Lin et al., 2016) and pruning of neural networks ("The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Frankle and Carbin). I do not remember a reference, but it is possible to train quantized models directly so that they use less memory.
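
To illustrate item 1 above, here is a minimal sketch of a distillation loss in the spirit of Hinton et al. (2015), assuming PyTorch; the temperature `T` and mixing weight `alpha` are hypothetical choices, not values from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend the usual cross-entropy with a KL term that matches the teacher's
    temperature-softened distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)   # the T^2 factor keeps gradient magnitudes comparable across temperatures
    return alpha * hard + (1.0 - alpha) * soft
```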


