
"Not everything warrants an efficient implementation" is an old maxim. Yet, an explosion in computing power never stops to amaze me. For example, recently I needed to deal with a text file where sentences were divided into several classes (say 200). The file itself contained 2-3K lines. I had a loop over class identifiers. In each iteration, I got a class id x and had to retrieve sentences belonging to this class x from the file.

A technical problem prevented me from parsing the file once and storing the results in, say, a hash map. Instead, in each iteration I had to read the whole file, parse it, and keep only the sentences related to the current class id x. A horrible solution with potentially quadratic runtime, right?
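A minimal sketch of this re-read-everything loop in Java. The file format here is an assumption (one sentence per line, prefixed by its class id and a tab), as are the file name and the class count:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class RereadDemo {

    // Re-reads and re-parses the whole file, keeping only the
    // sentences whose class id matches classId.
    // Assumed format: "<classId>\t<sentence>" per line.
    static List<String> sentencesForClass(String path, int classId)
            throws IOException {
        List<String> result = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(path))) {
            int tab = line.indexOf('\t');
            if (tab < 0) continue; // skip malformed lines
            int id = Integer.parseInt(line.substring(0, tab));
            if (id == classId) {
                result.add(line.substring(tab + 1));
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // The potentially quadratic loop: one full pass over the
        // file for every class id.
        for (int classId = 0; classId < 200; classId++) {
            List<String> sentences =
                sentencesForClass("sentences.tsv", classId);
            // ... process the sentences for this class ...
        }
    }
}
```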

It was horrible, and in the beginning I was a bit worried about the efficiency of this approach. One wouldn't do it on an old i386 (or slower) machine. However, when I tested this solution on a modern Core i7 laptop, it turned out that re-reading and re-parsing the file took only 0.02 seconds in Java. Other components were much slower, and I could in principle have afforded a 10x larger file with 10x as many unique classes (compared to the current 2.5K-line file with fewer than 200 classes).
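For what it's worth, a crude way to take this kind of measurement (a hypothetical harness, reusing the sentencesForClass helper from the sketch above) is to wrap a single pass in System.nanoTime() calls:

```java
// Hypothetical timing of one full read+parse pass.
long start = System.nanoTime();
List<String> sentences = sentencesForClass("sentences.tsv", 42);
double elapsedSec = (System.nanoTime() - start) / 1e9;
System.out.printf("One pass took %.3f sec (%d sentences kept)%n",
        elapsedSec, sentences.size());
```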