 

Benefits of GRUs over LSTMs

This is written in response to a Quora question, which asks about the benefits of GRUs over LSTMs. Feel free to upvote my answer on Quora!

The primary advantage is the speed of training and inference: a GRU has two gates instead of three (and, hence, fewer parameters). However, the simpler design comes at the expense of inferior capabilities (in theory). There is a paper arguing that LSTMs can count, but GRUs cannot.
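To make the parameter comparison concrete, here is a minimal sketch (assuming PyTorch; the layer sizes are arbitrary) that counts the parameters of one-layer LSTM and GRU modules of the same dimensions:

```python
# A rough sketch (PyTorch assumed; sizes are made up) comparing the number of
# parameters in a single-layer LSTM vs. a GRU of the same dimensions.
import torch.nn as nn

input_size, hidden_size = 128, 256

lstm = nn.LSTM(input_size, hidden_size)  # input, forget, output gates + cell candidate
gru = nn.GRU(input_size, hidden_size)    # update, reset gates + candidate state

def num_params(module):
    return sum(p.numel() for p in module.parameters())

print("LSTM:", num_params(lstm))  # 4 * hidden * (input + hidden) + 8 * hidden
print("GRU: ", num_params(gru))   # 3 * hidden * (input + hidden) + 6 * hidden, i.e., ~25% fewer
```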

This loss of computational expressivity may not matter much in practice. In fact, there is recent work showing that a trimmed, single-gate LSTM can be quite effective: "The unreasonable effectiveness of the forget gate" by Westhuizen and Lasenby, 2018.



What are some direct implications of Wittgenstein’s work on natural language processing?

This is written in response to a Quora question, which asks about direct implications of Wittgenstein’s work on natural language processing. Feel free to upvote my answer on Quora!

How could Wittgenstein have influenced modern NLP? Yorick Wilks, cited by the question asker, hints at three possible aspects:

  1. Distributional semantics
  2. Symbolic representations and computations
  3. Empiricism

Wittgenstein likely played an important role in the establishment of distributional semantics. We mostly cite Firth’s famous "You shall know a word by the company it keeps", but this was preceded by Wittgenstein’s "For a large class of cases—though not for all—in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in the language." This formulation appears in his “Philosophical Investigations”, published posthumously in 1953, but he started to champion the idea as early as the 1930s. It likely influenced later thinkers and possibly even Firth.
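As a toy illustration of how this "meaning is use" idea becomes operational in NLP, here is a minimal sketch (the corpus is made up) that represents each word by the company it keeps, i.e., by context co-occurrence counts:

```python
# A toy sketch of distributional semantics: words are characterized by the
# counts of words co-occurring with them in a small window.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

window = 2
cooc = defaultdict(Counter)  # word -> counts of its context words

for sentence in corpus:
    for i, w in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                cooc[w][sentence[j]] += 1

# Words used in similar contexts end up with similar count vectors.
print(cooc["cat"])
print(cooc["dog"])
```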

Let’s move on to symbolic representations. In his earlier work, Wittgenstein postulates that the world is a totality of facts, i.e., logical propositions (a view known as logical atomism). It is not entirely clear what the practical consequences of this statement would be (should it be adopted as an NLP paradigm). In addition, Wittgenstein rejected logical atomism later in life. He also declared that it is not possible (or productive) to define words by mental representations or references to real objects: instead, one should focus exclusively on word use. This sounds very "anti-ontology" to me.

Last but not least, modern NLP has a statistical foundation. However, Wittgenstein never advocated an empirical approach to language understanding. In fact, I have found evidence that he dismissed weak empiricism.



My $0.05 on the street value of pre-trained embeddings

This is written in response to a Quora question, which asks about the street value of pre-trained models. Feel free to upvote my answer on Quora!

This is an interesting question, and there is clearly no definitive answer. My personal impression (partly based on my own experience) is that, with a reasonable amount of training data, pre-training and/or data augmentation is not especially useful (if at all). In particular:

  1. In a recent paper from Facebook, this is demonstrated for an image detection/segmentation task: Rethinking ImageNet Pre-training. He et al. 2018.
  2. A couple of recent chilling results:
    1. Researchers from Google and Carnegie Mellon University showed that a 300x (!) increase in the number of training examples only modestly improves performance. I find this result especially interesting because the data is only weakly supervised (i.e., it is the most realistic big-data scenario).
    2. Unsupervised training does not yet work for truly low-resource languages: Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English. Guzman et al. 2018.
  3. Here is one example from the speech-recognition domain: Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer. Rao et al. 2018. Pre-training works here, but the gains are rather modest: "We find CTC pre-training to be helpful improving WER 13.9%→13.2% for voice-search and 8.4%→8.0% for voice-dictation".

Thus, if you are after SOTA results on a given dataset, you may need to be very clever and efficient in obtaining tons of training data. That said, pre-training certainly allows one to achieve better results in many cases, especially when the amount of training data is small. This can be really useful for bootstrapping. See, e.g., the following radiology paper: Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. Shin et al. 2016.
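In practice, the "street value" often just amounts to initializing the lowest layer of a model from pre-trained vectors rather than from scratch. Here is a minimal sketch (PyTorch assumed; the "pre-trained" matrix is a random stand-in):

```python
# A minimal sketch of bootstrapping a small-data model from pre-trained word
# embeddings (PyTorch assumed; the pre-trained matrix here is hypothetical).
import torch
import torch.nn as nn

vocab_size, dim = 10000, 300
pretrained = torch.randn(vocab_size, dim)  # stand-in for, e.g., word2vec/GloVe vectors

# freeze=True keeps the embeddings fixed; set it to False to fine-tune them
# once there is enough in-domain training data.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)

token_ids = torch.tensor([[1, 42, 7]])   # a toy batch of token ids
features = embedding(token_ids)          # shape: (1, 3, 300), fed into the rest of the model
```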

That said, AI is a fast-developing field, and we see particularly impressive advances in transfer learning for NLP. This line of work largely started with the following great papers (particularly the Allen AI ELMo paper):

  1. Semi-supervised Sequence Learning. Andrew M. Dai, Q. Le. 2015.
  2. ELMo paper: Deep contextualized word representations. Peters et al. 2018 (a minimal usage sketch follows below).
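For illustration, here is a minimal sketch of extracting contextualized representations. It assumes the HuggingFace transformers library and a BERT checkpoint (which post-dates ELMo but follows the same idea), rather than the original ELMo implementation:

```python
# A minimal sketch (assuming the HuggingFace `transformers` library and a BERT
# checkpoint) of extracting contextualized word representations, as opposed to
# static word2vec-style vectors.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates", return_tensors="pt")
outputs = model(**inputs)

# One vector per (sub)token; the vector for "bank" depends on the whole sentence.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```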

Recently, we have seen quite a few improvements on this front, with papers from OpenAI (GPT), Google AI (BERT), and Microsoft (I think it’s called Big Bird, but I am a bit uncertain). These improvements are huge and very encouraging. Let us not forget that the road to these successes has been paved by two seminal papers, which largely started neural NLP:

  1. Natural language processing (almost) from scratch. R Collobert, J Weston, L Bottou, M Karlen. 2011.
  2. Distributed representations of words and phrases and their compositionality. T Mikolov, I Sutskever, K Chen, GS Corrado, J Dean. 2013.

Both papers proposed their own variants of neural word embeddings, learned in an unsupervised fashion. This was clearly a demonstration of the street value of a pre-trained model in NLP. Furthermore, the first paper, which was a bit ahead of its time, went much further and presented possibly the first suite of core neural NLP tools (for POS tagging, named entity recognition, and parsing). It is worth mentioning that there are also earlier and less-known papers on neural NLP, including (but not limited to) a seminal neural language modeling paper by Y. Bengio.
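For illustration, here is roughly what training word2vec-style embeddings in an unsupervised fashion looks like today (a minimal sketch assuming gensim 4.x and a toy stand-in corpus):

```python
# A minimal sketch (assuming gensim 4.x; the corpus is a toy stand-in) of training
# word2vec-style embeddings on raw, unlabeled text.
from gensim.models import Word2Vec

sentences = [
    "natural language processing almost from scratch".split(),
    "distributed representations of words and phrases".split(),
    "neural language models learn word embeddings from raw text".split(),
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Vectors and nearest neighbors come "for free", without any labeled data.
vector = model.wv["language"]
print(model.wv.most_similar("language", topn=3))
```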

In conclusion, I would also note that a lot of pre-training has been done in a supervised fashion. Perhaps this is a limiting factor, as the amount of supervised data is relatively small. We may be seeing this change with more effective unsupervised pre-training methods. This has become quite obvious in the NLP domain, but there is a positive trend in the image community too. For example, in this recent tutorial (scroll to the unsupervised pre-training part), there are a couple of links to recent unsupervised training approaches that rival ImageNet pre-training.

Some further reading: a good overview of transfer learning is given by S. Ruder.



Soviet-era joke, which is still relevant today

To understand this Soviet-era joke, you need to know two things:

  1. In an ideologically driven society, it is dangerous to disagree with the official/dominant point of view planted by the country’s autocratic rulers.

  2. Planting this view requires an incessant flood of propaganda making us think that we are doing much better than we actually are.

Now the story:

A state propaganda agent comes to give a presentation at a mental facility. He presents in front of a crowd of mentally ill patients and tells them how well the economy is doing, how unbelievably fast it is growing, and how much better life will be in the near future.

There is a standing ovation, but one man abstains from participation.

— Why don’t you applaud? asks the official.

— I am not mentally ill, I am a nurse here.

PS: There is a possibly obvious, but not-so-funny, side to this joke: it remains relevant today in a variety of domains (well beyond politics).



Will natural language processing engineers find it hard to get work in the future (once computers are capable of near-perfect text and speech processing)?

This is written in response to a Quora question, which asks if most NLP engineers will be out of jobs once computers are capable of near-perfect text and speech processing. Feel free to upvote my answer on Quora!

So, will NLP engineers be out of jobs? Yes, absolutely! The future is highly uncertain. As Marvin Minsky believed, an imperfect human race will create a new robotic race that will not suffer from human limitations and that will inherit the Earth and the surrounding planets. Just as humans have outcompeted and replaced many other species, these robots will outcompete and replace humans. We should not, however, fear the future, but rather fulfill our evolutionary destiny and welcome our robot overlords.

As impressive as they are, existing AI systems only seem to be intelligent. We do not know how many dozens, hundreds, or, possibly, thousands of years it will take to create truly intelligent machines. In fact, we do not truly know what it means to be intelligent and what is required to be intelligent. A recent high-profile paper, “Building Machines That Learn and Think Like People” (Lake et al., 2016), tries to find some answers, but its conclusions are far from definitive.

Ray Kurzweil famously predicted that the singularity would happen in 2045, based on the exponential growth of computational capacity. However, a transistor is already only about 100x the size of an atom. My guess would be that the current technology has the potential for perhaps a 10x increase in capacity. It also seems that there is no production-ready replacement on the immediate horizon. In particular, it is not clear when (and whether) 3D chips will become available.

At the same time, the best GPUs have about 20 billion transistors, while the human brain has 100 billion neurons, each of which has about 10K connections (synapses) on average. How many transistors are necessary to implement an artificial neuron? One of the most advanced custom neural chips, TrueNorth, implements one million spiking neurons and 256 million synapses on a chip with 5.5 billion transistors and a typical power draw of 70 milliwatts.

Thus, it takes about 20 transistors per synapse. Even if we assume that an artificial neuron is as powerful as a real one (which is likely very far from the truth), the current technology is six freaking orders of magnitude behind the human brain! Size notwithstanding, power consumption is also an enormous challenge: according to the above-cited report, if TrueNorth were scaled up to the size of the human brain, it would require 10,000 times more energy!
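The back-of-the-envelope arithmetic behind these claims, using the same rough figures quoted above:

```python
# Back-of-the-envelope check of the numbers above (all figures are the rough
# estimates quoted in the text, not precise measurements).
brain_neurons = 100e9
synapses_per_neuron = 10e3
brain_synapses = brain_neurons * synapses_per_neuron      # ~1e15 synapses

truenorth_transistors = 5.5e9
truenorth_synapses = 256e6

print(truenorth_transistors / truenorth_synapses)         # ~21 transistors per synapse

gap = brain_synapses / truenorth_synapses
print(f"{gap:.1e}")                                       # ~3.9e6: about six orders of magnitude
```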

Furthermore, it is highly unrealistic to assume that an artificial neuron is nearly as complex as a real one. For example, the book Wetware: A Computer in Every Living Cell by Dennis Bray argues that even a single-cell organism (albeit a rather large one) can exhibit extremely complex behaviors, including sensing and hunting. C. elegans has only about 300 neurons, yet it has a basic sensory system and muscle control! It can reproduce and mate.

As my co-author and friend Daniel Lemire noted, our planes do not fly like birds and our submarines do not swim like fish. We do not have to mimic the human brain to solve artificial intelligence tasks, and we may not even need a brain-like structure to create a truly thinking machine. However, I would argue that we—using the phrase of the Turing Award winner Richard Hamming—simply do not have an attack, i.e., a reasonable way to approach this difficult problem.

Another good observation from Daniel Lemire is that we should expect the unexpected, because experts can easily be wrong. For example, there were predictions about the impossibility of flight in the early 20th century. Although we should expect breakthroughs at any time, I do not think that the impossibility of flight for heavier-than-air machines is a good analogy. The first gliders appeared well before the first powered planes; in fact, some people had very clear ideas about how planes should and could fly. This is not true for general artificial intelligence: we have not built the first gliders yet.

Traveling to the stars is clearly a difficult problem; few people would argue with that. However, for some reason everybody thinks that artificial intelligence is just a few (dozen) years away. Well, it could be so, but it could also be harder than interstellar travel.

Even if we could create a human-scale neural network, we do not know how to program it efficiently. The state-of-the-art approach to training a model consists of collecting a huge amount of data and having a neural network learn a mapping from inputs to outputs. This approach has truly revolutionized speech and vision and has improved text processing to some degree. However, it might be just a gigantic “fuzzy” memory.

This approach is also incredibly brittle and data-greedy, and we do not know whether we can scale it from hundreds to millions of layers. A number of recent papers show that it is very easy to “poison” training data: for example, on the IMDB sentiment dataset the error rate can be driven from 12% to 23% by adding only 3% poisoned data. State-of-the-art CNNs fail (accuracy drops from over 90% to about 10%) on color-modified CIFAR-10 images that are easily classified by humans.
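To make the poisoning idea concrete, here is a toy sketch of the simplest possible attack, flipping the labels of a small fraction of the training set; the results cited above are based on more carefully crafted attacks, so this only illustrates the general setup:

```python
# A toy illustration of the simplest kind of data poisoning: flipping the labels
# of a small fraction of training examples. The cited results use more carefully
# crafted attacks; this only shows the general setup.
import numpy as np

rng = np.random.default_rng(0)

def poison_labels(labels, fraction=0.03):
    """Flip binary labels (e.g., sentiment) for a random `fraction` of examples."""
    poisoned = labels.copy()
    n_poison = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

labels = rng.integers(0, 2, size=25000)      # toy stand-in for IMDB training labels
poisoned = poison_labels(labels, fraction=0.03)
print((labels != poisoned).mean())           # ~0.03 of the training set is corrupted
```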

Another big success of neural networks is speech recognition; perhaps it is the biggest success so far. For clean speech we can get near-human recognition rates. However, on noisy data, and especially when multiple speakers are present (the infamous cocktail-party setting), the results are well below human performance. The cocktail-party setting is especially bad: it is considered a big success if you can reduce the word error rate from 90% down to 50% or 30% (i.e., the computer still misses every second or third word).

One clear issue with the current approaches is that clean training data can be quite expensive to obtain. For the existing not-so-clean data (collected in a semi-supervised fashion), there may be only a small benefit from scaling an (already huge) training set by a further two (!) orders of magnitude.

For example, recent work by researchers from Google and Carnegie Mellon University showed that a 300x (!) increase in the number of training examples only modestly improves performance. There is a lot of hope that reinforcement learning will solve these issues, but it does not seem to work yet.

All in all, judging by the many publications and blog posts I have read over the last six years, we can now do well in a number of constrained domains, but the success depends mostly on the existence of human-created training data and tons of engineering effort. In particular, I suspect that the success of end-to-end systems (i.e., systems built without the engineering effort to modularize the problem and assemble it from multiple, sometimes handcrafted, models) is still limited.

Extending existing techniques to new domains requires many years of work from skilled engineers and scientists. I do not see how this can change in the near future; I actually expect that we will need many more scientists and engineers to keep making good progress. Brace yourself: it looks like there are megatons of work ahead.


