How fast is UIMA subiterator function?

Assume that you need to identify part-of-speech (POS) tags in the sentence "My dog likes meat". You run a POS tagger and get the following result (pronouns are green, nouns are blue, and verbs are red):

My dog likes meat

One can view this process as highlighting words with pens of different colors. We do not change the original text; we just add information. Computerized processing of natural languages adopted this paper-and-pen model by introducing the concept of electronic annotations. An annotation is simply a record that "highlights" a span of characters. The annotation has a start, an end, a type, and a number of attributes. In our example, all annotations are of the same type, but their color attribute allows us to distinguish among different POS tags. In the NLP world, everything is an annotation!

Annotations are typically created independently by different annotation tools. Yet, we frequently need to know which annotations overlap. For example, sentence boundaries may be denoted by annotations of one type, while tokens and POS tags could be annotations of different types. Figuring out which POS tags are assigned to tokens in a given sentence requires us to identify POS tags and tokens that overlap with the sentence (or, rather, that are contained in it).

Probably, you do not want to reinvent the wheel and would rather implement annotation processing using some off-the-shelf software. In particular, our group employs UIMA-ECD, a framework on top of Apache UIMA. (For more details on UIMA-ECD, please see the tutorial. BTW, UIMA-ECD is a much easier beast to tame than pure UIMA.) An overlap of annotations is computed using the function subiterator (see also my recent comment on type priorities).

How efficient is subiterator? In theory, UIMA uses indexes, but I could not find any comments on the efficiency of this operation in the official documentation. I tried to look at the source code, but it was hard for me to quickly get through the maze of classes and abstract interfaces. Anyway, even if UIMA uses some form of an index, how efficient is such an index? Being a bit paranoid, I decided to benchmark.

To this end, I created a simple pipeline. In this pipeline, I removed HTML from Wikipedia documents. Then, documents were annotated using the SENNA parser and OpenNLP. OpenNLP creates annotations for sentence boundaries, while the SENNA parser identifies POS tags (recall that sentence boundaries are denoted by annotations).

Finally, I iterated over sentences, and for each sentence I retrieved the related POS tags using two approaches. The first approach employed subiterator and was expected to be efficient (due to relying on indexes). In the second approach, I simply iterated over all POS tags in the document. This one should be slow, because a Wikipedia document has thousands of POS tags and hundreds of sentences; thus, the nested loop (first over sentences, then over POS tags) can be expensive. My code is freely available.
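For concreteness, here is a rough Java sketch of the two retrieval strategies (Sentence and POS are placeholder JCas type names, and the timing code is omitted):

  import org.apache.uima.cas.FSIterator;
  import org.apache.uima.cas.text.AnnotationIndex;
  import org.apache.uima.jcas.JCas;
  import org.apache.uima.jcas.tcas.Annotation;

  public class SubiteratorSketch {
    // Sentence and POS stand in for the actual JCas annotation types of the pipeline.
    static void collectTags(JCas jcas) {
      AnnotationIndex<Annotation> sentIndex = jcas.getAnnotationIndex(Sentence.type);
      AnnotationIndex<Annotation> posIndex  = jcas.getAnnotationIndex(POS.type);

      FSIterator<Annotation> sentIt = sentIndex.iterator();
      while (sentIt.hasNext()) {
        Annotation sent = sentIt.next();

        // Approach 1: the index-based subiterator returns only annotations
        // inside the sentence (equal spans need type priorities, see UPDATE1).
        FSIterator<Annotation> subIt = posIndex.subiterator(sent);
        while (subIt.hasNext()) {
          Annotation tag = subIt.next();
          // ... use the POS tag
        }

        // Approach 2: brute force, scan every POS tag in the document
        // and test containment explicitly.
        FSIterator<Annotation> allIt = posIndex.iterator();
        while (allIt.hasNext()) {
          Annotation tag = allIt.next();
          if (tag.getBegin() >= sent.getBegin() && tag.getEnd() <= sent.getEnd()) {
            // ... use the POS tag
          }
        }
      }
    }
  }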

Depending on the document, the average time to retrieve a POS tag using an index varied from 1 to 5 microseconds. In the brute-force iteration approach, which does not rely on an index, the time to retrieve a POS tag varied from 0.1 to 0.5 milliseconds, a difference of two orders of magnitude.

Conclusions? The UIMA subiterator function is not terribly fast (1-15K CPU cycles per operation), but it is rather efficient. I would say it should be good enough for most tasks.


UPDATE1: This example will always work if the sentence span is strictly larger than the span of a contained annotation. If the spans can be of equal size, one needs to properly define type priorities; specifically, the Sentence type needs to have a higher priority.

UPDATE2: See also a follow-up post. There is a more efficient uimaFIT implementation of the subiterator functionality that also does not depend on type priorities.
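Roughly speaking, one can get the same result with uimaFIT's JCasUtil.selectCovered, which does not consult type priorities (POS is again a placeholder type name):

  import java.util.List;
  import org.apache.uima.fit.util.JCasUtil;

  // Selects all POS annotations whose spans are contained in the sentence;
  // no type priorities are involved.
  List<POS> tags = JCasUtil.selectCovered(jcas, POS.class, sentence);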



Unix pipes have small capacity

There is a catch in the previously described solution for wrapping NLP tools. Namely, a Unix pipe has a small buffer (I believe it is on the order of kilobytes). This is why you need to send input data in small chunks and read all the output after each chunk is processed. Otherwise, a deadlock can happen.

If you try to push a large chunk, the tool's write call (which writes data to the output pipe) will block until your application reads data from the other side of the output pipe. At the same time, your application will be trying to push input data through the input pipe. Because the pipe has limited capacity, your application will exhaust this capacity and "freeze" in its own write call, waiting for the NLP tool to read data from the other side of the input pipe. Yet the NLP tool, in turn, will be waiting for your application!

As Di Wang pointed out, you can avoid this situation by using a temporary file and a pipe in a clever way. As in the naive and unsafe solution described in the beginning of the post, you will write the input data to a temporary file. Then, you will write the name of this temporary file to the input pipe. Because the name of the file is very small, you will not exceed the pipe capacity.
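The workaround might look as follows in Java (the pipe name matches the earlier post; the class and method names are made up):

  import java.io.BufferedWriter;
  import java.io.FileWriter;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.Path;

  public class TempFileWorkaround {
    // Write a potentially large input batch to a temporary file and send only
    // the short file name through the named pipe "input_pipe".
    static void sendBatch(String batchText) throws Exception {
      Path tmp = Files.createTempFile("nlp_batch", ".txt");
      Files.write(tmp, batchText.getBytes(StandardCharsets.UTF_8));
      try (BufferedWriter pipe = new BufferedWriter(new FileWriter("input_pipe"))) {
        pipe.write(tmp.toString());
        pipe.newLine();
      }
      // The NLP tool then reads the file name from the pipe, opens that file,
      // and processes its contents.
    }
  }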



Plumbing with named pipes: a simple approach to wrap standalone NLP tools

A modus operandi of many natural language processing (NLP) tools is as follows: start, read raw data from the standard input, write processed data (e.g., POS tags) to the standard output, and finish. That is, a lot of NLP tools come without any server mode. At the same time, the cost of starting a process can be substantial. Thus, if you have to supply raw data in small chunks, processing can be very slow.

What is the easiest way to fix this, if we do not care about multi-threading? Writing a fully-fledged network daemon is possible, but rather hard. Some folks default to modifying the NLP tool so that it reads input from a file as the input appears. The modus operandi is then as follows: sit and wait until the input file is created; when this happens, read the raw data and write the processed data to some other file. It is an easy solution, but synchronization can be problematic: we may end up with one process reading from the file while another one is still writing to it.

A much better way is to use a named pipe! Unix named pipes are almost identical to the regular temporary pipes that we use to direct the output of one process to the input of another (using the symbol |). The only difference is that named pipes are permanent. From the perspective of a Unix process, a named pipe is just a regular pipe from which you can read data (or to which you can write data). A benefit of using a pipe is that the operating system takes care of synchronization: one process can safely read from the pipe while another process is writing to it.

To begin the plumbing, we need to create two named pipes, one for input, another for output:

mkfifo input_pipe
mkfifo output_pipe

Then, we need to modify our NLP tool.

1) We make the tool process data in an infinite loop (rather than exiting after processing all the input). At the beginning of the loop, we open the output pipe for writing (the input pipe is opened only once, when the tool starts). After all data is processed, we close the output pipe. Note that closing and re-opening the output pipe is important. Otherwise, the receiving process will not see the EOF marker and, consequently, will wait forever.
2) We replace all operators that read raw data from the standard input with operators that read data from the input named pipe.
3) We replace all operators that write processed data to the standard output with operators that write data to the output named pipe.

In C/C++, this is straightforward (in other languages this is not hard either). For instance, we replace

  fgets(sentence, MAX_SENTENCE_SIZE, stdin)

with
  fgets(sentence, max_sent_size + 1, senna_input)

That is pretty much it. As a working example, I am publishing the modified main C-file of the SENNA parser version 3.0.
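For completeness, here is a rough sketch of what the application side could look like in Java (the pipe names match the mkfifo commands above; the chunks and the rest are made up). We keep the input pipe open for the whole session, mirroring the tool, and re-open the output pipe for every chunk, so that the tool's closing of the pipe gives us a clean end-of-output marker:

  import java.io.BufferedReader;
  import java.io.BufferedWriter;
  import java.io.FileReader;
  import java.io.FileWriter;

  public class PipeClient {
    public static void main(String[] args) throws Exception {
      String[] chunks = { "My dog likes meat .", "Named pipes are handy ." };

      // Keep the input pipe open for the whole session (the tool opens it once, too).
      try (BufferedWriter toTool = new BufferedWriter(new FileWriter("input_pipe"))) {
        for (String chunk : chunks) {
          toTool.write(chunk);
          toTool.newLine();
          toTool.flush(); // push the (small!) chunk through the pipe

          // The tool closes the output pipe after each batch, so reading
          // until EOF gives us exactly one batch worth of processed data.
          StringBuilder processed = new StringBuilder();
          try (BufferedReader fromTool = new BufferedReader(new FileReader("output_pipe"))) {
            String line;
            while ((line = fromTool.readLine()) != null) {
              processed.append(line).append('\n');
            }
          }
          System.out.println(processed);
        }
      }
    }
  }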

Happy plumbing and let your pipes be functioning properly!

PS1: There is one catch in the described solution. Namely, a Unix pipe has a small buffer (I believe it is on the order of kilobytes). This is why you need to send input data in small chunks and read all the output after each chunk is processed. Otherwise, a deadlock can happen.

If you try to push a large chunk, the tool's write call (which writes data to the output pipe) will block until your application reads data from the other side of the output pipe. At the same time, your application will be trying to push input data through the input pipe. Because the pipe has limited capacity, your application will exhaust this capacity and "freeze" in its own write call, waiting for the NLP tool to read data from the other side of the input pipe. Yet the NLP tool, in turn, will be waiting for your application!

As Di Wang pointed out, you can avoid this situation by using a temporary file and a pipe in a clever way. As in the naive and unsafe solution described in the beginning of the post, you will write the input data to a temporary file. Then, you will write the name of this temporary file to the input pipe. Because the name of the file is very small, you will not exceed the pipe capacity.

PS2: In principle, an even simpler approach might be possible, one that would not require modifying the NLP tool. We could create a wrapper utility that simulates the following Unix command:

printing_binary | ./nlp_tool_binary | recipient_binary

The trick is to implement this pipeline in such a way that printing_binary is the same process as recipient_binary. In C/C++ on Unix, this is possible, e.g., through the system call popen. One big issue, though, is that we need to feed several input batches to nlp_tool_binary, while the output from nlp_tool_binary may be a single stream of characters/bytes without separators that indicate batch boundaries.

Clearly, we need to pause after feeding every input batch until we retrieve all the respective output data. However, if we keep reading the output of nlp_tool_binary using a blocking read operation, we will eventually end up waiting forever. We can read using a non-blocking system call, but how long should we repeat this system call before giving up and declaring that nlp_tool_binary has processed all the data? It might be possible to use some knowledge about the output format of nlp_tool_binary, or simply stop trying after a certain time elapses. Yet, none of these solutions is sufficiently reliable or generic, so if anybody knows a better approach, please, let me know.
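For what it is worth, in Java a similar wiring can be done with ProcessBuilder instead of popen: the wrapper process then owns both ends of the tool's standard streams, so printing_binary and recipient_binary are trivially the same process. The batch-boundary problem remains, though; the sketch below simply assumes, hypothetically, that the tool terminates each batch with an empty line:

  import java.io.BufferedReader;
  import java.io.BufferedWriter;
  import java.io.InputStreamReader;
  import java.io.OutputStreamWriter;

  public class PopenStyleWrapper {
    public static void main(String[] args) throws Exception {
      // Start the NLP tool; this process plays the role of BOTH
      // printing_binary and recipient_binary.
      Process tool = new ProcessBuilder("./nlp_tool_binary").start();
      BufferedWriter toTool =
          new BufferedWriter(new OutputStreamWriter(tool.getOutputStream()));
      BufferedReader fromTool =
          new BufferedReader(new InputStreamReader(tool.getInputStream()));

      toTool.write("My dog likes meat .");
      toTool.newLine();
      toTool.flush();

      // The hard part: without batch separators we cannot reliably tell where
      // the output for this batch ends. Here we hypothetically assume the tool
      // terminates a batch with an empty line, which is NOT true in general.
      String line;
      while ((line = fromTool.readLine()) != null && !line.isEmpty()) {
        System.out.println(line);
      }
    }
  }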



Training an enormous neural network at home

Ever had to train a complex model using the limited resources available at home? Perhaps you thought about using GPUs or something. Yet, even the largest state-of-the-art neural network, requiring as many as 16 expensive GPUs, has only 10 billion connections. At the same time, a small computational device called felis catus (FC) has up to 10^13 synapses, making it a system three orders of magnitude more complex. This computational power is put to good use. For example, it permits solving motion-related differential equations in real time.

Deep neural networks demonstrate good performance in the task of speech recognition. Thus, it was quite natural to apply an FC to this problem. However, we decided to take it one step further and trained the FC to recognize visual clues in addition to voice signals. It was a challenging task due to the FC's proclivity to overtrain, as well as the lack of theoretical guarantees of convergence. The model has been slowly converging for more than a year, but this was worthwhile. We achieved an almost perfect recognition rate, and our results are statistically robust. A demo is available online.

This post is co-authored with Anna Belova.



A small clarification on the LSH post

This is a clarification to the previous post. Note that I am not claiming that the LSH analysis is wrong! I am saying that apparently the analysis uses a simplifying assumption. And this assumption was swept under the carpet for many years (which is quite surprising). See also a discussion in the blog of Daniel Lemire.

Yet, such approaches are quite common in CS, statistics, physics, etc. As Daniel Lemire pointed out, the analysis of regular hash tables is essentially based on a very similar assumption: that a hash function distributes elements more or less randomly. I believe that in practice this is true only to a certain degree (and it highly depends on the hash function). Yet, we still use the analysis. So, it is probably not a big deal.

I have recently come across another (actually well-known) paper, where the authors talk at length about their assumptions being reasonable, but make a simple math error (or at least my co-author and I think so). So, it is infinitely better to rely on (perhaps silent) assumptions but get the math right, than to substantiate the assumptions and totally screw up the math.


