
General Method with Examples

Dynamic Programming is one of the most powerful design techniques for solving optimization problems.

A Divide & Conquer algorithm partitions the problem into disjoint subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem.

Dynamic Programming is used when the subproblems are not independent, i.e., when subproblems share subsubproblems. In this case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times.

Dynamic Programming solves each subproblem just once and stores the result in a table so that it can be retrieved whenever it is needed again.

Dynamic Programming is typically a bottom-up approach: we solve all possible small subproblems first and then combine their solutions to obtain solutions to bigger problems.

Dynamic Programming is a paradigm of algorithm design in which an optimization problem is solved by combining solutions to subproblems and appealing to the “principle of optimality“.
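To make the bottom-up idea concrete, here is a minimal Python sketch (our illustration, not part of the original lesson) that computes Fibonacci numbers by filling a table of subproblem results; the naive recursive definition would recompute the same values exponentially many times, while the table reduces this to n additions.

# Bottom-up dynamic programming for Fibonacci numbers (illustrative sketch).
# Each subproblem F(i) is solved exactly once and stored in a table.
def fib_dp(n):
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        # A bigger problem is solved from two smaller, already-solved ones.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_dp(30))  # 832040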

Characteristics of Dynamic Programming:

Dynamic Programming works when a problem has the following features:

  • Optimal Substructure: A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems.
  • Overlapping subproblems: A problem has overlapping subproblems when a recursive algorithm would visit the same subproblems repeatedly.

If a problem has optimal substructure, then we can recursively define an optimal solution. If a problem has overlapping subproblems, then we can improve on a recursive implementation by computing each subproblem only once.

If a problem doesn’t have optimal substructure, there is no basis for defining a recursive algorithm to find the optimal solutions. If a problem doesn’t have overlapping subproblems, we have nothing to gain by using dynamic programming.

If the space of subproblems is small enough (i.e., polynomial in the size of the input), dynamic programming can be much more efficient than naive recursion.
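As a hedged sketch of the overlapping-subproblems point, the Python snippet below (ours, with illustrative names) counts how many recursive calls a naive Fibonacci makes versus a memoized version that computes each subproblem only once.

# Naive recursion: revisits the same subproblems exponentially often.
def fib_plain(n, counter):
    counter[0] += 1
    if n < 2:
        return n
    return fib_plain(n - 1, counter) + fib_plain(n - 2, counter)

# Memoized recursion: each distinct subproblem is computed only once.
def fib_memo(n, memo, counter):
    counter[0] += 1
    if n not in memo:
        memo[n] = n if n < 2 else (
            fib_memo(n - 1, memo, counter) + fib_memo(n - 2, memo, counter))
    return memo[n]

naive, memoized = [0], [0]
fib_plain(25, naive)
fib_memo(25, {}, memoized)
print(naive[0], memoized[0])  # 242785 calls vs. 49 calls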

Elements of Dynamic Programming

There are basically three elements that characterize a dynamic programming algorithm:

  1. Substructure: Decompose the given problem into smaller subproblems. Express the solution of the original problem in terms of the solutions to the smaller problems.
  2. Table Structure: After solving the subproblems, store their results in a table. This is done because subproblem solutions are reused many times, and we do not want to solve the same problem over and over again.
  3. Bottom-up Computation: Using the table, combine the solutions of smaller subproblems to solve larger subproblems, eventually arriving at a solution to the complete problem (see the LCS sketch after the note below).

Note: Bottom-up means:

  1. Start with the smallest subproblems.
  2. Combine their solutions to obtain solutions to subproblems of increasing size.
  3. Continue until the solution of the original problem is reached.
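As one concrete illustration of these three elements, here is a short Python sketch (ours) for the longest common subsequence problem, which the lesson lists later among the applications of dynamic programming:

# Longest common subsequence by dynamic programming (illustrative sketch).
def lcs_length(x, y):
    m, n = len(x), len(y)
    # Table structure: L[i][j] holds the LCS length of x[:i] and y[:j].
    L = [[0] * (n + 1) for _ in range(m + 1)]
    # Bottom-up computation over prefixes of increasing size.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                # Substructure: extend the LCS of the two shorter prefixes.
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4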

Components of Dynamic Programming

  1. Stages: The problem can be divided into several subproblems, which are called stages. A stage is a small portion of a given problem. In the shortest path problem, for example, the stages were defined by the structure of the graph.
  2. States: Each stage has several states associated with it. The states for the shortest path problem were the nodes reached.
  3. Decision: At each stage, there can be multiple choices, out of which the best decision should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
  4. Optimal policy: A rule that determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as the Bellman principle of optimality.
  5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it was not necessary to know how we reached a node, only that we did.
  6. There exists a recursive relationship that identifies the optimal decisions for stage j, given that stage j + 1 has already been solved (see the sketch after this list).
  7. The final stage must be solved by itself.
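To ground the stage/state/decision vocabulary, here is a small Python sketch (the graph and names are our own, purely illustrative) that solves a tiny shortest path instance by working backward one stage at a time:

# Tiny shortest path example organized by stages (illustrative sketch).
# edges[u] maps a node u to its successors in the next stage with edge costs.
edges = {
    's': {'a': 1, 'b': 4},   # stage 1 -> stage 2
    'a': {'t': 5},           # stage 2 -> stage 3
    'b': {'t': 1},
}

def shortest_to_t(u, dist):
    """dist[u] caches the length of the shortest path from state u to t."""
    if u == 't':
        return 0
    if u not in dist:
        # Decision: at state u, choose the successor minimizing edge cost
        # plus the (already optimal) remainder; how we reached u is
        # irrelevant, only the state u itself matters.
        dist[u] = min(c + shortest_to_t(v, dist) for v, c in edges[u].items())
    return dist[u]

print(shortest_to_t('s', {}))  # 5, via s -> b -> t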

Development of Dynamic Programming Algorithm

It can be broken into four steps:

  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of the optimal solution. Like Divide and Conquer, divide the problem into two or more optimal parts recursively. This helps to determine what the solution will look like.
  3. Compute the value of the optimal solution from the bottom up (starting with the smallest subproblems).
  4. Construct the optimal solution for the entire problem from the computed values of smaller subproblems.

Applications of Dynamic Programming

  1. 0/1 knapsack problem (see the sketch after this list)
  2. Mathematical optimization problems
  3. All-pairs shortest path problem
  4. Reliability design problem
  5. Longest common subsequence (LCS)
  6. Flight control and robotics control
  7. Time-sharing: scheduling jobs to maximize CPU usage
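As a hedged sketch of the first application, here is a compact 0/1 knapsack solver in Python (our illustration; the problem itself is treated in detail later in the course):

# 0/1 knapsack by dynamic programming (illustrative sketch).
# dp[w] = best total value achievable with capacity w using the items seen so far.
def knapsack(values, weights, capacity):
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220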

Basic Example

EXAMPLE 1: Coin-row problem

There is a row of n coins whose values are some positive integers c1, c2,…,cn, not necessarily distinct. The goal is to pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up.

Let F (n) be the maximum amount that can be picked up from the row of n coins. To derive a recurrence for F (n), we partition all the allowed coin selections into two groups: those that include the last coin and those without it. The largest amount we can get from the first group is equal to cn + F (n − 2)—the value of the nth coin plus the maximum amount we can pick up from the first n − 2 coins. The maximum amount we can get from the second group is equal to F (n − 1) by the definition of F (n). Thus, we have the following recurrence subject to the obvious initial conditions:

F(n) = max(cn + F(n − 2), F(n − 1)) for n > 1,
F(0) = 0, F(1) = c1.    (Formula 1)

We can compute F(n) by filling a one-row table left to right, in a manner similar to the way it was done for the nth Fibonacci number by the algorithm Fib(n).

ALGORITHM CoinRow(C[1..n])
//Applies Formula 1 bottom up to find the maximum amount of money
//that can be picked up from a coin row without picking two adjacent coins
//Input: Array C[1..n] of positive integers indicating the coin values
//Output: The maximum amount of money that can be picked up
F[0] ← 0; F[1] ← C[1]
for i ← 2 to n do
    F[i] ← max(C[i] + F[i − 2], F[i − 1])
return F[n]
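For readers who want to run the algorithm, the following is a direct Python transcription of CoinRow (ours, not from the source):

# Python transcription of ALGORITHM CoinRow (illustrative sketch).
def coin_row(coins):
    n = len(coins)
    if n == 0:
        return 0
    F = [0] * (n + 1)   # F[i] = best amount from the first i coins
    F[1] = coins[0]
    for i in range(2, n + 1):
        # Either take coin i (plus the best from the first i - 2 coins)
        # or skip it and keep the best from the first i - 1 coins.
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]

print(coin_row([5, 1, 2, 10, 6, 2]))  # 17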

The application of the algorithm to the coin row of denominations 5, 1, 2, 10, 6, 2 is shown in Figure 1. It yields the maximum amount of 17. It is worth pointing out that, in fact, we also solved the problem for the first i coins in the row for every 1 ≤ i ≤ 6. For example, for i = 3, the maximum amount is F(3) = 7.

To find the coins that make up the maximum total value found, we need to backtrace the computations to see which of the two possibilities, cn + F(n − 2) or F(n − 1), produced the maximum in Formula 1. In the last application of the formula, it was the sum c6 + F(4), which means that the coin c6 = 2 is a part of an optimal solution. Moving to computing F(4), the maximum was produced by the sum c4 + F(2), which means that the coin c4 = 10 is a part of an optimal solution as well. Finally, the maximum in computing F(2) was produced by F(1), implying that the coin c2 is not a part of an optimal solution and the coin c1 = 5 is. Thus, the optimal solution is {c1, c4, c6}. To avoid repeating the same computations during the backtracing, the information about which of the two terms in Formula 1 was larger can be recorded in an extra array when the values of F are computed.
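A minimal sketch of this backtracing in Python (our code; it recomputes each comparison instead of storing an extra array, which is equally valid for this small recurrence):

# Coin-row with solution recovery by backtracing (illustrative sketch).
def coin_row_with_solution(coins):
    n = len(coins)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    # Walk backward through the table, checking which term of the max won.
    picked, i = [], n
    while i >= 1:
        if i == 1 or coins[i - 1] + F[i - 2] >= F[i - 1]:
            picked.append(i)   # coin c_i is part of an optimal solution
            i -= 2
        else:
            i -= 1             # coin c_i was skipped
    return F[n], sorted(picked)

print(coin_row_with_solution([5, 1, 2, 10, 6, 2]))  # (17, [1, 4, 6])

As the text notes, recording which of the two terms won during the forward pass in an extra array avoids even these repeated comparisons.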

FIGURE 1: Solving the coin-row problem by dynamic programming for the coin row 5, 1, 2, 10, 6, 2. The table fills as F = 0, 5, 5, 7, 15, 15, 17.
