We now look at an important class of algorithms based on dynamic programming. These rely on the principle of optimality, which says:
the overall problem can be broken into subproblems
these subproblems can be solved efficiently
their solutions can be combined into a solution for the overall problem
this can be done recursively
Note that divide-and-conquer algorithms are based on the assumption that the subproblems are disjoint.
Greedy algorithms differ from the above in that many different subproblems may be possible, but a greedy algorithm commits to one fixed strategy regardless of the input.
In dynamic programming, by contrast, the subproblems generally overlap, which is why we need to identify the subproblem structure.
Secondly, there is no fixed strategy, so we may not be able to find a greedy one.
This means we have to exhaust all possibilities.
Dynamic programming accepts these constraints and still manages to be efficient.
We do this with the following three steps:
1) we come up with a recursive formulation in terms of subproblems
2) we organize the subproblems so that the search is efficient
3) we solve the subproblems and record the answers to avoid recomputation
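As a minimal illustration of these three steps, consider the Fibonacci numbers (a standard textbook example, not one from the discussion above): the recursive formulation is fib(n) = fib(n-1) + fib(n-2), the subproblems are organized by n, and memoization records each answer so nothing is recomputed.

```python
from functools import lru_cache

# Step 1: recursive formulation fib(n) = fib(n-1) + fib(n-2).
# Step 2: subproblems are indexed by n, from small to large.
# Step 3: lru_cache records each answer to avoid recomputation,
# turning an exponential-time recursion into a linear-time one.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```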
We will now see an example: the interval scheduling problem.
In this problem, we have tasks, each with a predefined start time and finish time.
We look for subsets of non-overlapping tasks and
try to choose the largest such subset.
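This unweighted version does have a well-known greedy solution: repeatedly pick the compatible task that finishes earliest. A sketch (the representation of tasks as (start, finish) pairs is my own choice):

```python
def max_nonoverlapping(tasks):
    """Greedy sketch for the unweighted problem: among the tasks
    compatible with the selection so far, always take the one that
    finishes earliest. tasks is a list of (start, finish) pairs."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(tasks, key=lambda t: t[1]):
        if start >= last_finish:   # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```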
A twist on the above problem is referred to as weighted interval scheduling.
In this problem, each task fetches a profit, and we try to maximize not the number of jobs but the total profit from the chosen jobs.
As we may see, there is no known greedy strategy for this problem. This means we must exhaust all combinations and choose the best, and there are exponentially many combinations.
We will now reorganize the combinations to make the search more efficient.
We sort the jobs by finish time, so that job 1 ends first, job 2 next, and so on.
We then consider jobs one at a time. If a job is included, we must exclude every job that overlaps it; if it is excluded, we solve the remaining subproblem recursively. The recursion terminates when one job remains, and the profit is that job's profit.
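Because the jobs are sorted by finish time, the latest job that ends before a given job starts can be found by binary search. A sketch of this preprocessing step (the helper name `predecessors` and the (start, finish, profit) representation are my own):

```python
import bisect

def predecessors(jobs):
    """For jobs sorted by finish time, given as (start, finish, profit)
    triples, return p where p[k] is the index of the latest job that
    finishes no later than job k starts, or -1 if no such job exists."""
    finishes = [f for _, f, _ in jobs]
    # bisect_right counts how many finish times are <= this start time;
    # subtracting 1 gives the index of the last such job.
    return [bisect.bisect_right(finishes, s) - 1 for s, _, _ in jobs]
```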
Let MaxProfit(n) denote the maximum profit achievable from jobs 1 .. n,
and let Profit(i) denote the actual profit of job i.
Our recursive formulation is as follows:
MaxProfit(k) = max(MaxProfit(k-1), Profit(k) + MaxProfit(j))
where j is the latest job that ends before job k starts,
and MaxProfit(1) = Profit(1).
The first component in the recursive formulation above comes from excluding job k, and the second component comes from including job k.
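The recurrence can be evaluated bottom-up over the jobs in finish-time order. A sketch (function and variable names are my own; I use MaxProfit(0) = 0 as a convenient extra base case, which agrees with MaxProfit(1) = Profit(1) above):

```python
import bisect

def max_profit(jobs):
    """Bottom-up evaluation of
        MaxProfit(k) = max(MaxProfit(k-1), Profit(k) + MaxProfit(j)),
    where j is the latest job ending no later than job k starts.
    jobs is a list of (start, finish, profit) triples."""
    jobs = sorted(jobs, key=lambda j: j[1])     # organize by finish time
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    best = [0] * (n + 1)                        # best[k] = MaxProfit over jobs 1..k
    for k in range(1, n + 1):
        start, _, profit = jobs[k - 1]
        j = bisect.bisect_right(finishes, start)  # number of jobs ending by this start
        # either exclude job k, or include it together with jobs 1..j
        best[k] = max(best[k - 1], profit + best[j])
    return best[n]
```

Each of the n subproblems is solved once and its answer stored in `best`, so the whole computation takes O(n log n) time including the sort and the binary searches.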