This was prompted by several recent posts, in particular by Zach Lipton's tweet, in which he complained that ML culture has come to revolve around hacking: "The dominant practice of the applied machine learnist has shifted from ad-hoc feature hacking (2000s) to ad-hoc architecture hacking (2010s) to ad-hoc pre-training hacking (2020s)."

This may seem like just another (relatively innocent) complaint about the lack of rigor and scholarship in machine learning. In my opinion, however, it reflects a much bigger issue: the divide between theorists and experimentalists, between scholars and tinkerers. Opinions on both sides of this divide differ sharply. For example, my friend and co-author Daniel Lemire goes rather far by saying that scholarship is conservative while tinkering is progressive. On the other side, there are people eager to dismiss tinkerers and experimentalists as tech bros or "mere" engineers.

I do not subscribe to either of these extremes. However, I believe that tinkering has been an essential, if not the primary, engine of progress. There is an understandable desire to explain things, which amounts to building a theoretical model of the world. This is clearly super-useful, but it has limitations. First of all, theories are not bullet-proof: they aim to explain experimental data, and they evolve over time. One example is the "contest" between the geocentric and heliocentric systems: at some point, the geocentric system was better supported by the data, despite being wrong (in the modern understanding). Somewhat similarly, Newton's physics had to be amended, and we will probably have to make many amendments to existing theoretical models as well.

Second, theories are limited, often to a great extent. One has to make assumptions (which never truly hold in practice) as well as many simplifications. One of my favorite examples is the theory of locality-sensitive hashing (LSH), sketched below. Another is parsing in natural language processing (NLP). Parsing is a crucial component of rule-based (and hybrid) NLP, and a lot of effort has been devoted to making it effective, in particular by training (deep) neural network models to do parsing. Despite the improvements brought by deep learning, parsing is not particularly popular nowadays. One problem, in my opinion, is that the linguistic theories behind parsing explain only a limited number of language phenomena. Thus, these theories have (so far) been more useful for debugging existing neural networks than for building fully functional applications such as question-answering or sentiment-analysis systems.
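To make the LSH example concrete, here is a minimal toy sketch of the classic random-hyperplane scheme for cosine similarity (an illustration, not a production implementation; the dimension and bit count are arbitrary values picked for the demo). The theory here is elegant and exact: two vectors at angle θ agree on each bit with probability 1 − θ/π. Yet it says little about the questions a practitioner actually faces, such as how many bits and hash tables a given dataset needs, which is typically settled empirically.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy parameters, chosen only for illustration.
dim, n_bits = 64, 16

# Each row is a random hyperplane; each hyperplane contributes one bit.
planes = rng.standard_normal((n_bits, dim))

def lsh_signature(planes, x):
    """Bit signature of x: the sign of its projection onto each hyperplane."""
    return tuple(bool(b) for b in (planes @ x >= 0))

a = rng.standard_normal(dim)
b = a + 0.1 * rng.standard_normal(dim)  # a near-duplicate of a
c = rng.standard_normal(dim)            # an unrelated vector

sig_a, sig_b, sig_c = (lsh_signature(planes, v) for v in (a, b, c))

# Similar vectors should agree on most bits; unrelated ones on roughly half.
print("a vs b matching bits:", sum(x == y for x, y in zip(sig_a, sig_b)))
print("a vs c matching bits:", sum(x == y for x, y in zip(sig_a, sig_c)))
```

The per-bit guarantee above is all the theory promises; turning it into a useful index (choosing the number of bits, the number of tables, how to handle skewed real-world data) is exactly the kind of tinkering this post is about.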

In summary, I would emphasize that theory is certainly useful: not only for understanding the world more fully, but also for providing insights that guide tinkering. That said, I believe it is, and will continue to be, limited, so we cannot dismiss tinkering as some inferior way of doing science or engineering. Daniel Lemire also notes that tinkering is dangerous, and it is hard to disagree: these dangers need to be mitigated. However, I do not think it is realistic to expect people to wait until fully formed, useful theories appear, in particular because developing such theories depends on tinkerers producing experimental results.