Many software developers and computer scientists are familiar with the concept of dynamic programming. Despite its arcane and intimidating name, dynamic programming is a rather simple technique for solving complex, recursively defined problems. It works by reducing a problem to a collection of overlapping subproblems, each of which can in turn be solved recursively. The existence of overlapping subproblems is what differentiates dynamic programming from other recursive approaches such as divide-and-conquer. An ostensibly drab mathematical topic, dynamic programming nevertheless has a remarkable history.
The approach originated from discrete-time optimization problems studied by R. Bellman in the 1950s and was later extended to a wider range of tasks, not necessarily related to optimization. One classic example is the Fibonacci numbers F(n) = F(n-1) + F(n-2). It is straightforward to compute the Fibonacci numbers one by one in order of increasing n, memoizing the results along the way. Here, the computation of F(n-2) is a shared subproblem whose solution is required to obtain both F(n-1) and F(n). Clearly, this is just a mathematical trick with no obvious connection to programming. So how did it come by such a strange name?
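As an illustration, here is a minimal Python sketch (the language and function names are my own choices for this post, not anything prescribed by the original formulation) contrasting the naive recursion with the bottom-up dynamic-programming computation:

```python
# Naive recursion: F(n-2) is recomputed for both F(n-1) and F(n),
# so the number of calls grows exponentially with n.
def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming, bottom-up: each F(k) is computed exactly once,
# in order of increasing k, reusing the two most recent values.
def fib_dp(n: int) -> int:
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fib_naive(10), fib_dp(10))  # 55 55
```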
There is a dramatic, but likely untrue, explanation given by R. Bellman in his autobiography:
"The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was secretary of Defense, and he actually had a pathological fear and hatred of the word ‘research’. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term ‘research’ in his presence. You can imagine how he felt, then, about the term ‘mathematical’ … Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation.
What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word ‘programming’. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely ‘dynamic’, in the classical physical sense … Thus, I thought ‘dynamic programming’ was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities."
(From Stuart Dreyfus, Richard Bellman on the Birth of Dynamic Programming)
This anecdote, though, is easy to disprove. There is published evidence that the term dynamic programming was coined in 1952 (or earlier), whereas Wilson became the Secretary of Defense in 1953. Wilson held a degree in electrical engineering from Carnegie Mellon. Before 1953, he was the CEO of a major technology company, General Motors, and at earlier stages of his career he supervised the development of various electrical equipment. It is, therefore, hard to believe that this man could truly hate the word "research". (The observation on the date mismatch was originally made by Russell & Norvig in their book on artificial intelligence.)
Furthermore, linear programming (which also has "programming" in its name) appears in the papers of G. Dantzig before 1950. The confusing term "linear programming", as Dantzig explained in his book, was based on the military definition of the word "program", which simply means planning and logistics. In mathematics, the term was adopted to denote optimization problems and gave rise to several names such as integer, convex, and non-linear programming.
It should now be clear that the birth of dynamic programming was far less dramatic: R. Bellman simply picked up the standard terminology and embellished it with the adjective "dynamic" to highlight the temporal nature of the problems he studied. There was nothing unusual in the choice of the word "dynamic" either: the notion of a dynamic(al) system (a system with time-dependent state) comes from physics and was already in wide use in the 19th century.
Dynamic programming is of great importance in computational biology and approximate string searching. Both fields rely on string similarity functions that are variants of the Levenshtein distance, which was formally published in 1966 (1965 in the Russian original). Yet it took almost 10 years for the community to fully realize how dynamic programming could be used to compute these similarity functions. That is an interesting story, covered in a follow-up post.
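To give a flavor of the computation, here is a minimal sketch of the textbook dynamic-programming algorithm for the Levenshtein distance (a standard formulation in Python, given purely for illustration; it is not taken from the follow-up post):

```python
def levenshtein(a: str, b: str) -> int:
    # d[i][j] = edit distance between the first i characters of a
    # and the first j characters of b; each cell is a subproblem
    # shared by several larger subproblems.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all i characters of a
    for j in range(n + 1):
        d[0][j] = j          # insert all j characters of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

print(levenshtein("kitten", "sitting"))  # 3
```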
Edited by Anna Belova
Comments
Although computer scientists commonly teach that dynamic programming has nothing to do with dynamical systems, in fact it can be used to compute optimal control algorithms for them. (I have no idea if this had any contribution to the naming of "dynamic programming", though.)
Norman, thank you for stopping by. Bellman did work on dynamical systems: http://en.wikipedia.org/wiki/Bellman_equation A good question is whether it influenced the name. In my opinion: yes, but he apparently tried to obfuscate the issue, so we don't know this for sure.