
Modified by Dr. ISSAM ALHADID
20/3/2019
 General Method
 Multistage Graphs
 Traveling Salesperson Problem (TSP)
 For many other problems, it is not possible to
make stepwise decisions (based only on local
information) leading to an optimal sequence.
 One way to solve such problems, where stepwise
decisions cannot yield an optimal sequence, is
to try out all possible decision sequences.
 We could enumerate (specify/list) all decision
sequences and pick out the best.

 Dynamic Programming reduces the amount of
enumeration by avoiding the enumeration (listing)
of some decision sequences that cannot be
optimal.
 The DP method is based on the principle of
optimality.
 Principle of optimality: An optimal sequence
of decisions has the property that whatever
the initial state and decision are, the
remaining decisions must establish an
optimal decision sequence with regard to the
state resulting from the first decision.
 The dynamic programming approach is similar to
divide and conquer in breaking down the
problem into smaller and smaller
sub-problems. But unlike divide and
conquer, these sub-problems are not solved
independently. Rather, the results of these
smaller sub-problems are remembered and
used for similar or overlapping sub-problems.
 Divide-and-conquer algorithms partition the
problem into independent subproblems,
solve the subproblems recursively, and then
combine their solutions to solve the original
problem.
 Dynamic programming is applicable when the
subproblems are not independent.
 A dynamic-programming algorithm solves
every subsubproblem just once and then
saves its answer in a table, thereby avoiding
the work of re-computing the answer every
time the subsubproblem is encountered.
 Dynamic programming is typically applied to
optimization problems. In such problems
there can be many possible solutions. Each
solution has a value, and we wish to find a
solution with the optimal (minimum or
maximum) value.
 The development of a dynamic-programming
algorithm can be broken into a sequence of
four steps.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a
bottom-up fashion.
4. Construct an optimal solution from computed
information.
 Dynamic programming can be used in both a
top-down and a bottom-up manner. And of
course, most of the time, referring to a
previously computed solution is cheaper than
re-computing it in terms of CPU cycles.
 Dynamic Programming (DP) is a way of
improving on inefficient divide-and-conquer
algorithms. By “inefficient”, we mean that the
same recursive call is made over and over.

 If same sub-problem is solved several times,


we can use table to store result of a sub-
problem the first time it is computed, and
thus never have to re-compute it again.
 Alternatively, we can think about filling up a
table of subproblem solutions from the
bottom-up.
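Both styles can be sketched with the classic Fibonacci recurrence; this Python example (illustrative only, not from the slides) computes the same value top-down with a memo table and bottom-up by filling the table directly.

```python
from functools import lru_cache

# Top-down DP (memoization): each sub-problem is solved once;
# later calls just look the answer up in the cache table.
@lru_cache(maxsize=None)
def fib_td(n):
    return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)

# Bottom-up DP: fill the table of sub-problem solutions
# from the smallest sub-problem upward.
def fib_bu(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Without the table, the plain recursion would repeat the same calls exponentially many times; with it, both versions solve each sub-problem once.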
 The difference between the greedy method and
dynamic programming is that in the greedy
method only one decision sequence is ever
generated, whereas in dynamic programming
many decision sequences may be generated.
 Dynamic programming is a bottom-up method,
while greedy is a top-down method.
 Greedy algorithms tend to be easier to code.
 However, sequences containing sub-optimal
sub-sequences cannot be optimal (if the
principle of optimality holds) and so will not be
generated.
 Principle of Optimality: Definition: A problem is said to
satisfy the Principle of Optimality if the sub-solutions of
an optimal solution of the problem are themselves optimal
solutions for their sub-problems.
 Examples:
◦ The shortest path problem satisfies the Principle of Optimality.
◦ This is because if a,x1,x2,...,xn,b is a shortest path from node a
to node b in a graph, then the portion of xi to xj on that path is a
shortest path from xi to xj.
◦ The longest path problem, on the other hand, does not satisfy the
Principle of Optimality. Take for example the undirected graph G
with nodes a, b, c, d, and e, and edges (a,b), (b,c), (c,d), (d,e) and (e,a).
That is, G is a ring. The longest (noncyclic) path from a to d is
a,b,c,d. The sub-path from b to c on that path is simply the edge
b,c. But that is not the longest path from b to c. Rather, b,a,e,d,c
is the longest path. Thus, a subpath of a longest path is not
necessarily a longest path.
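The ring counterexample can be checked mechanically; this small Python script (purely illustrative, reusing the node names from the example) finds longest simple paths by brute force.

```python
from itertools import permutations

# Undirected ring: a-b-c-d-e-a
edges = {('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e'), ('e', 'a')}

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def longest_path(src, dst):
    """Brute-force longest simple (noncyclic) path from src to dst."""
    best = None
    middles = set('abcde') - {src, dst}
    # Try every ordering of every subset of intermediate nodes.
    for k in range(len(middles) + 1):
        for mid in permutations(middles, k):
            path = (src,) + mid + (dst,)
            if all(connected(path[i], path[i + 1])
                   for i in range(len(path) - 1)):
                if best is None or len(path) > len(best):
                    best = path
    return best
```

Running it confirms the longest a-to-d path is a,b,c,d, while its b-to-c sub-path (the single edge b,c) is not the longest b-to-c path, which is b,a,e,d,c.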
 DP is used to solve problems with the
following characteristics:
 Simple Sub-problems:
◦ We should be able to break the original problem to
smaller sub-problems that have the same structure.
 Optimal Sub-structure of the problems:
◦ The optimal solution to the problem contains within
its optimal solutions to sub-problems.
 Overlapping sub-problems:
◦ There exist some places where we solve the same
sub-problem more than once.
 Thus, because of the use of the principle of
optimality, decision sequences containing
sub-sequences that are suboptimal are not
considered.
 The total number of different decision
sequences is exponential in the number of
decisions; that is:
◦ If there are d choices for each of the n decisions to
be made, then there are d^n possible decision
sequences.
 By pruning these suboptimal sequences,
dynamic programming algorithms often
achieve polynomial complexity.
 Another important feature of DP approach is
that optimal solutions to sub-problems are
retained/saved so as to avoid re-computing
their values.

 The use of these tabulated values makes it


natural to reshape the recursive equations
into an iterative program
 Examples using dynamic programming
methods:
◦ Multistage graphs
◦ All Pairs Shortest Paths
◦ Optimal Binary Search Trees
◦ 0/1 Knapsack
◦ Traveling Salesperson Problem
◦ Flow Shop Scheduling
 Multistage Graph Problem.
 Multistage Graph Example.
 Multistage Graph – Forward Approach
◦ Multistage Graph Formulation.
◦ Multistage Graph Example.
◦ Multistage Graph Algorithm.
◦ Algorithm’s Time and Space.
 Multistage Graph – Backward Approach
◦ Multistage Graph Formulation.
◦ Multistage Graph Example.
◦ Multistage Graph Algorithm.
◦ Algorithm’s Time and Space.
 Multistage Graph (Shortest Path): A multistage
graph is a directed graph in which the nodes can
be divided into a set of stages such that all edges
go from one stage to the next stage only (in other
words, there is no edge between vertices of the
same stage, and no edge from a vertex of the
current stage back to a previous stage).
 The multistage graph problem is a subset selection problem.
 A multistage graph G=(V, E) is a directed graph
in which the vertices are partitioned into k ≥ 2
disjoint sets Vi , 1 ≤ i ≤ k.
 Moreover, if (u, v) is an edge in E, then u is in Vi
and v is in Vi+1 for some i, 1 ≤ i ≤ k-1.
 The sets V1 and Vk are such that |V1| = |Vk|=1.
 Let s and t respectively be the vertex in V1 and in
Vk, where s is the source and t is the sink.
 Let c(i, j) be the cost of edge (i, j), where the cost
of a path from s to t is the sum of the costs of the
edges on the path.
 The multistage graph problem is to find a
minimum cost path from s to t, where each set Vi
defines a stage in the graph.
 Every path from s to t starts in stage 1, goes to
stage 2, then to stage 3, etc. and eventually
terminates in stage k.
 Figure 1: A 5 stage graph.
 A minimum cost s to t path is indicated by
the dark edges.
 A dynamic programming formulation for the
k-stage graph problem is obtained by first
noticing that every s to t path is the result of a
sequence of k-2 decisions.
 The ith decision involves determining which
vertex in Vi+1, 1 ≤ i ≤ k-2, is to be on the
path.
 It is easy to see that the principle of
optimality holds.
◦ Let P(i, j) be a minimum cost path from vertex j in
Vi to vertex t.
◦ Let COST(i, j) be the cost of this path.
◦ Then, using the forward approach, we obtain:
COST(i, j) = min { c(j, l) + COST(i+1, l) : l in Vi+1 and (j, l) in E }   (Eq-1)
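The forward approach can be sketched in Python as follows. The 4-stage graph used here is a small illustrative example of my own, not the graph of Figure 1; `multistage_forward` and the vertex names are likewise illustrative.

```python
def multistage_forward(stages, c):
    """Forward-approach DP for the multistage shortest path problem.

    stages: list of lists of vertices, stages[0] = [s], stages[-1] = [t].
    c: dict mapping edge (u, v) -> cost.
    COST[j] holds the minimum cost from j to the sink t.
    """
    t = stages[-1][0]
    COST, D = {t: 0}, {}
    # Compute COST for stages k-1 down to 1; successors are already done.
    for stage in reversed(stages[:-1]):
        for j in stage:
            COST[j], D[j] = min(
                (w + COST[l], l) for (u, l), w in c.items() if u == j
            )
    # Follow the recorded decisions D to recover a minimum cost path.
    path = [stages[0][0]]
    while path[-1] != t:
        path.append(D[path[-1]])
    return COST[stages[0][0]], path

# A small illustrative 4-stage graph (not Figure 1 from the slides).
stages = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
c = {('s', 'a'): 1, ('s', 'b'): 2,
     ('a', 'c'): 2, ('a', 'd'): 4,
     ('b', 'c'): 6, ('b', 'd'): 3,
     ('c', 't'): 3, ('d', 't'): 2}
```

Recording the minimizing successor in D while filling COST is what makes the later path reconstruction a simple walk rather than a second search.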
 Using the previous formulation (forward approach)
on Figure 1, we obtain the following:
 Thus, a minimum cost s to t path has a cost
of 16.
 The previous minimum cost path can be
determined easily if we record the decision
(D) made at each stage (vertex).

(Table: Stage | Vertex | Vertex that gives min. cost)
 Assume n vertices in V are indexed 1 through n.
 Indices are assigned in order of stages.
 First, s is assigned index 1, then vertices in V2
are assigned indices, then vertices in V3 and so
on, then t has index n.
 Thus, indices assigned to vertices in Vi+1 are
bigger than those assigned to vertices in Vi.
 Therefore, COST and Decision (D) may be
computed in the order n-1, n-2, …, 1.
 The first subscript in COST, Path (P), and D only
identifies the stage number and is omitted in the
algorithm.
 Algorithm’s Time Complexity:
◦ If the input graph G is represented by its
adjacency lists, then r in line 4 may be
found in time proportional to the degree
of vertex j.
◦ If G has e edges then the time for the for
loop of lines 3 to 7 is Θ(n+e).
◦ The time for the for loop of lines 9 to 11
is Θ(k).
◦ Hence, the time complexity is Θ(n+e).
 Algorithm’s Space:
◦ In addition to the space needed for the
input, space is needed for COST, D, and P.
◦ As an exercise find the total space needed
◦ (H.W.)
 Let BP(i, j) be a minimum cost path from vertex s
to vertex j in Vi.
 Let BCOST(i, j) be the cost of BP(i, j).
 From the backward approach we obtain:
BCOST(i, j) = min { BCOST(i-1, l) + c(l, j) : l in Vi-1 and (l, j) in E }   (Eq-2)

 Since BCOST(2, j) = c(1, j) if (1, j) is in E, and
BCOST(2, j) = ∞ if (1, j) is not in E, BCOST(i, j)
may be computed using Eq-2 by first computing
BCOST for i = 3, then i = 4, etc.
 This backward approach algorithm has the same
time complexity as the forward approach
algorithm, provided G is now represented by its
inverse adjacency lists.
◦ That is for each vertex v we have a list of vertices w
such that (w, v) in E.
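A Python sketch of the backward approach, again on a small illustrative 4-stage graph of my own (not Figure 1); `multistage_backward` and the vertex names are illustrative.

```python
def multistage_backward(stages, c):
    """Backward-approach DP: BCOST[j] = min cost from the source s to j.

    stages: list of lists of vertices, stages[0] = [s], stages[-1] = [t].
    c: dict mapping edge (u, v) -> cost.
    """
    s = stages[0][0]
    BCOST, D = {s: 0}, {}
    # Compute BCOST for stages 2 up to k; predecessors are already done.
    for stage in stages[1:]:
        for j in stage:
            BCOST[j], D[j] = min(
                (BCOST[w] + cost, w) for (w, v), cost in c.items() if v == j
            )
    # Walk the recorded predecessor decisions D back from t to s.
    t = stages[-1][0]
    path = [t]
    while path[-1] != s:
        path.append(D[path[-1]])
    return BCOST[t], path[::-1]

# A small illustrative 4-stage graph (not Figure 1 from the slides).
stages = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
c = {('s', 'a'): 1, ('s', 'b'): 2,
     ('a', 'c'): 2, ('a', 'd'): 4,
     ('b', 'c'): 6, ('b', 'd'): 3,
     ('c', 't'): 3, ('d', 't'): 2}
```

Scanning `c.items()` for predecessors of j is effectively reading the inverse adjacency list, which is why this representation keeps the backward approach as fast as the forward one.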
 Exercise: Find the backward approach
algorithm’s space.
 H.W.
 Subset vs. Permutation Problems
 TSP – Problem Statement
 TSP – Problem Formulation
 TSP – DP Formulation
 TSP – DP Algorithm
 TSP – Example
 TSP – Computational Complexity
 TSP is a permutation problem.
 Permutation problems are usually much
harder to solve than subset problems: there
are n! different permutations of n objects,
while there are only 2^n different subsets of
n objects (n! grows much faster than 2^n).
 Using dynamic programming, TSP can be
solved in O(n^2 · 2^n) time, which is much
faster than the naive solution (brute-force
search over all n! tours).
 More details:
https://www.geeksforgeeks.org/travelling-
salesman-problem-set-1/
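The dynamic programming approach for TSP is the Held-Karp algorithm; the following Python sketch is illustrative (the function name and the 4-city distance matrix are my own, not from the slides).

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP: O(n^2 * 2^n) time, tour starts/ends at city 0.

    g[(S, j)] = length of a shortest path that starts at city 0,
    visits every city in the set S exactly once, and ends at j (j in S).
    """
    n = len(dist)
    g = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                # Best way to end at j: come from some k in S - {j}.
                g[(Sf, j)] = min(g[(Sf - {j}, k)] + dist[k][j]
                                 for k in S if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(g[(full, j)] + dist[j][0] for j in range(1, n))

# Illustrative 4-city symmetric instance (not from the slides).
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```

On this instance the minimum tour length is 80 (tour 0-1-3-2-0). The table is keyed by (subset, end city), which is exactly why the state count is 2^n · n rather than n!.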
 Given a directed graph G(V, E), with vertex set
V={1, 2, …, n} and edge set E.
 Application on TSP:
◦ Suppose we have to route a postal van to pick up
mail from mail boxes located at n different sites.
◦ An n+1 vertex graph may be used to represent the
situation.
◦ One vertex represents the post office from which
the postal van starts and to which it must return.
◦ Edge (i, j) is assigned a cost equal to the distance
from site i to site j.
◦ The route taken by the postal van is a tour and we
are interested in finding a tour of minimum length.
 Let V={1, 2, …, n} be the vertices of G.
Φ denotes the empty set.
 Consider the following directed graph, where
the edge lengths are given by matrix c.
