Dynamic programming
Dynamic programming is a method of solving problems. It is used in computer science, mathematics and economics. With this method, a complex problem is split into simpler problems, which are solved first. At the end, the solutions of the simpler problems are combined to give the solution of the original complex problem.
Dynamic programming can be used in cases where a problem can be split into smaller problems that all look alike, and where the same smaller problems come up again and again, so their solutions can be stored and reused.
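A minimal sketch of this idea uses the Fibonacci numbers (the function name here is only for illustration): each smaller problem is solved once, stored, and reused to build the next, larger answer.

```python
def fibonacci(n):
    """Compute the n-th Fibonacci number bottom-up.

    Each smaller problem (Fibonacci of 0, 1, ..., n) is solved exactly once,
    and its solution is kept so it can be reused for the next, larger problem.
    """
    if n < 2:
        return n
    previous, current = 0, 1  # solutions of the two smallest problems
    for _ in range(2, n + 1):
        previous, current = current, previous + current  # combine smaller solutions
    return current

print(fibonacci(10))  # 55
```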
Richard Bellman, a US mathematician, first used the term in the 1940s, when he was working on problems in control theory. He also stated what is now known as Bellman's Principle of Optimality:
An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
– Bellman, 1957
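For a shortest-path problem like the one in Figure 1, the principle says: if the shortest path from start to goal passes through some vertex, then the rest of the path from that vertex to the goal must itself be a shortest path. A minimal sketch under that idea (the small example graph and the function name are made up for illustration):

```python
from functools import lru_cache

# A small directed acyclic graph, made up for illustration:
# each vertex maps to its neighbours and the weight of the edge to them.
graph = {
    "start": {"a": 2, "b": 5},
    "a": {"b": 1, "goal": 7},
    "b": {"goal": 3},
    "goal": {},
}

@lru_cache(maxsize=None)
def shortest(vertex):
    """Length of the shortest path from `vertex` to "goal".

    Bellman's principle: an optimal path from `vertex` is one edge to a
    neighbour, followed by an optimal path from that neighbour onwards.
    """
    if vertex == "goal":
        return 0
    return min(weight + shortest(neighbour)
               for neighbour, weight in graph[vertex].items())

print(shortest("start"))  # 2 + 1 + 3 = 6
```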
Media
Figure 1. Finding the shortest path in a graph using optimal substructure; a straight line indicates a single edge; a wavy line indicates a shortest path between the two vertices it connects (among other paths, not shown, sharing the same two vertices); the bold line is the overall shortest path from start to goal.
Figure 2. The subproblem graph for the Fibonacci sequence. The fact that it is not a tree indicates overlapping subproblems.
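The overlap shown in Figure 2 can be made visible by counting how often a plain recursive computation asks for the same subproblem; the names below are only for illustration.

```python
from collections import Counter

calls = Counter()

def naive_fib(n):
    """Plain recursion: the same subproblems are solved again and again."""
    calls[n] += 1
    return n if n < 2 else naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(10)
print(calls[1])  # 55: fib(1) is recomputed 55 times; with memoization it is solved once
```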
References
- R. Bellman, "On the Theory of Dynamic Programming", Proceedings of the National Academy of Sciences of the USA, 38(8): 716–719, August 1952.