Dynamic Programming Overview

Breaking Down the Dynamic Programming Approach

Dynamic Programming (DP) is an algorithmic technique for solving a problem by breaking it down into smaller sub-problems, solving each sub-problem only once, and storing each sub-problem's solution so it never has to be recomputed. The solutions to the sub-problems are then combined to solve the original problem.

Because every sub-problem that appears more than once is solved a single time and its stored result is simply reused, this approach can significantly improve the efficiency of an algorithm and make it practical for real-world problem sizes.

Dynamic programming is often used for optimization problems where there are many possible solutions, and we need to find the best one. The process involves:

  1. Defining the problem and identifying its sub-problems.

  2. Finding the optimal solution to each sub-problem.

  3. Combining the solutions to the sub-problems to find the optimal solution to the original problem.

Dynamic programming can be used in a wide range of applications, such as computer vision, robotics, finance, and bioinformatics. Examples of dynamic programming problems include the Knapsack problem, the Longest Common Subsequence problem, and the Fibonacci sequence.
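
Of these, the Fibonacci sequence is the simplest illustration of the technique. The sketch below computes Fibonacci numbers top-down with memoization; the function and variable names (fib, memo) are illustrative choices for this example, not taken from any particular library.

#include <iostream>
#include <vector>

using namespace std;

// Computes the n-th Fibonacci number, storing each sub-problem's
// result in memo so it is solved only once.
long long fib(int n, vector<long long>& memo) {
    if (n <= 1) {
        return n;              // base cases: fib(0) = 0, fib(1) = 1
    }
    if (memo[n] != -1) {
        return memo[n];        // sub-problem already solved: reuse the stored result
    }
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 40;
    vector<long long> memo(n + 1, -1);   // -1 marks "not yet computed"
    cout << "Fibonacci(" << n << ") = " << fib(n, memo) << endl;
    return 0;
}

Without the memo table the same recursive calls would be repeated an exponential number of times; with it, each value from 0 to n is computed exactly once, giving O(n) time.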

Overall, dynamic programming is a powerful tool that can be used to solve complex problems efficiently and effectively by breaking them down into smaller, more manageable sub-problems.

The Knapsack problem

Given a set of items, each with a weight and a value, and a knapsack with a maximum weight capacity, determine the maximum value subset of the items that can fit into the knapsack.

The problem is often formulated as an integer programming problem or solved with dynamic programming, and it is NP-hard, meaning that no polynomial-time algorithm for it is known in general. For this reason, approximation algorithms and heuristics are often used in practical applications with very large inputs.

This problem can be solved using dynamic programming by breaking it down into sub-problems. The basic idea is to create a two-dimensional table where each row represents an item, and each column represents the capacity of the knapsack. The value in each cell of the table represents the maximum value that can be obtained with the given capacity and the items up to that row.

#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

int knapsack(int W, const vector<int>& wt, const vector<int>& val, int n)
{
    // dp[i][w] = maximum value achievable using the first i items
    // with a knapsack of capacity w.
    vector<vector<int>> dp(n + 1, vector<int>(W + 1));

    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0) {
                // No items considered or no capacity: nothing can be packed.
                dp[i][w] = 0;
            }
            else if (wt[i - 1] <= w) {
                // Item i fits: take the better of including or excluding it.
                dp[i][w] = max(val[i - 1] + dp[i - 1][w - wt[i - 1]], dp[i - 1][w]);
            }
            else {
                // Item i is too heavy for this capacity: exclude it.
                dp[i][w] = dp[i - 1][w];
            }
        }
    }
    return dp[n][W];
}

int main()
{
    vector<int> wt = {10, 20, 30};
    vector<int> val = {60, 100, 120};
    int W = 50;
    int n = wt.size();

    int max_val = knapsack(W, wt, val, n);
    cout << "Maximum value: " << max_val << endl;

    return 0;
}

The time complexity of the dynamic programming solution is O(NW), where N is the number of items and W is the capacity of the knapsack. Because W is a numeric value rather than the size of the input, this is a pseudo-polynomial bound, which is consistent with the problem being NP-hard; it is nevertheless far faster than the brute-force approach of checking every possible subset of items, which takes O(2^N) time.

The Knapsack problem has many practical applications, such as resource allocation, portfolio optimization, and production planning.

The Longest Common Subsequence problem

Given two sequences, find the longest subsequence that is common to both sequences. A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.

For example, given the two sequences "ABCDGH" and "AEDFHR", the longest common subsequence is "ADH", with a length of 3.

The LCS problem can be solved using dynamic programming. A dynamic programming table is constructed with the lengths of the LCS of all pairs of prefixes of the two sequences. The table is filled in a bottom-up manner, starting with the base case of empty prefixes, and using the recurrence relation to compute the lengths of longer prefixes. The final entry in the table gives the length of the LCS, and the LCS itself can be reconstructed by backtracking through the table.

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Returns the length of the longest common subsequence of X and Y.
int lcs(const string& X, const string& Y) {
    int m = X.length();
    int n = Y.length();
    // L[i][j] = length of the LCS of the first i characters of X
    // and the first j characters of Y.
    vector<vector<int>> L(m + 1, vector<int>(n + 1));

    // Fill the table bottom-up over all pairs of prefixes.
    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0) {
                L[i][j] = 0;                              // an empty prefix has an empty LCS
            } else if (X[i - 1] == Y[j - 1]) {
                L[i][j] = L[i - 1][j - 1] + 1;            // matching characters extend the LCS
            } else {
                L[i][j] = max(L[i - 1][j], L[i][j - 1]); // otherwise drop a character from one string
            }
        }
    }
    return L[m][n];
}

int main() {
    string X = "ABCDGH";
    string Y = "AEDFHR";
    int result = lcs(X, Y);
    cout << "Length of LCS is " << result << endl;
    return 0;
}

In this code, the lcs function takes two strings X and Y as input and returns the length of the longest common subsequence of the two strings. The function constructs a dynamic programming table L with the lengths of the LCS of all pairs of prefixes of the two strings and fills it in a bottom-up manner using a nested loop. The final entry in table L[m][n] gives the length of the LCS, which is returned as the output of the function.
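
As mentioned above, the LCS string itself can be recovered by backtracking through the table. The following sketch is an illustrative variant of the function above (the name lcsString is a hypothetical helper, not part of the code above): it fills the same table and then walks back from L[m][n], collecting the matched characters.

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Returns the longest common subsequence itself (not just its length)
// by filling the DP table and then backtracking from L[m][n].
string lcsString(const string& X, const string& Y) {
    int m = X.length();
    int n = Y.length();
    vector<vector<int>> L(m + 1, vector<int>(n + 1, 0));

    // Fill the table exactly as in the lcs function above.
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1]) {
                L[i][j] = L[i - 1][j - 1] + 1;
            } else {
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
            }
        }
    }

    // Backtrack from the bottom-right corner, collecting matched characters.
    string result;
    int i = m, j = n;
    while (i > 0 && j > 0) {
        if (X[i - 1] == Y[j - 1]) {
            result.push_back(X[i - 1]);   // this character is part of the LCS
            i--; j--;
        } else if (L[i - 1][j] >= L[i][j - 1]) {
            i--;                          // move toward the larger sub-problem
        } else {
            j--;
        }
    }
    reverse(result.begin(), result.end());
    return result;
}

int main() {
    cout << "LCS: " << lcsString("ABCDGH", "AEDFHR") << endl;  // prints "ADH"
    return 0;
}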

The time and space complexity of this algorithm are O(mn), where m and n are the lengths of the two sequences, since each cell of the table is filled exactly once in constant time.

Conclusion

Overall, dynamic programming is a powerful tool for solving complex problems efficiently by breaking them down into smaller, more manageable sub-problems and reusing their solutions, as the Knapsack and Longest Common Subsequence examples above demonstrate.