SNS COLLEGE OF TECHNOLOGY
(An Autonomous Institution)
COIMBATORE – 35
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (UG & PG)
Second Year Computer Science and Engineering, 4th Semester
UNIVERSITY QUESTIONS
Subject Code & Name: IT204 / Design and Analysis of Algorithm
Prepared by: A.Chandrasekar, ASP/CSE, S.R.Janani, AP/CSE, U.Supriya, AP/CSE
UNIT I  INTRODUCTION
PART  A

Define Big ‘Oh’ notation. (May/June 2012)
The function f(n) = O(g(n)) iff there exist positive constants C and n0 such that f(n) ≤ C · g(n) for all n ≥ n0.
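For instance, f(n) = 3n + 2 is O(n) with the witness constants C = 4 and n0 = 2, since 3n + 2 ≤ 4n whenever n ≥ 2. A quick numeric spot-check of such an inequality (a sketch for illustration only; the constants are chosen by hand):

```python
def is_big_oh_witness(f, g, C, n0, n_max=10_000):
    """Spot-check f(n) <= C*g(n) for n0 <= n <= n_max (finite check only)."""
    return all(f(n) <= C * g(n) for n in range(n0, n_max + 1))

# 3n + 2 <= 4n holds for all n >= 2, so C = 4, n0 = 2 witness f(n) = O(n)
print(is_big_oh_witness(lambda n: 3 * n + 2, lambda n: n, C=4, n0=2))  # True
```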

What is meant by linear search? (May/June 2012)
Linear search, also known as sequential search, is a process that checks every element in the list sequentially until the desired element is found. The computational complexity of linear search is O(n), making it generally much less efficient than binary search (O(log n)). But when list items can be arranged in decreasing order of the probability of being sought, and those probabilities follow a geometric distribution (f(x) = (1 - p)^(x-1) p, x = 1, 2, …), then linear search has the potential to be notably faster than binary search.

Differentiate Time complexity from Space complexity. (Nov/Dec 2012)
The time complexity of an algorithm is the amount of computer time it needs to run to completion.
The space complexity of an algorithm is the amount of memory it needs to run to completion.

What is a Recurrence Equation? (Apr/May 2010)
A recurrence equation is an equation or inequality that describes a function in terms of its value on smaller inputs.
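For example, merge sort satisfies the recurrence T(n) = 2T(n/2) + n with T(1) = 1, whose closed form for powers of two is n·log2(n) + n. A small sketch comparing the two (Python used here purely as a calculator):

```python
import math

# Merge-sort style recurrence: T(n) = 2*T(n/2) + n, T(1) = 1 (n a power of 2)
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

# Compare the recurrence's value with the closed form n*log2(n) + n
for n in (2, 8, 64):
    print(n, T(n), int(n * math.log2(n) + n))
```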

What is the use of asymptotic notations? (Apr/May 2011)
It is used to describe the limiting behavior of a function when the argument tends towards a particular value (often infinity), usually in terms of simpler functions. In computational complexity theory, big O notation is used to classify algorithms by how they respond (e.g., in their processing time or working space requirements) to changes in input size.

Define algorithm validation. (Nov/Dec 2012)
Once an algorithm has been devised, it becomes necessary to show that it computes the correct answer for all possible legal inputs. One simple way is to code it into a program. However, converting the algorithm into a program is a time-consuming process. Hence, it is essential to be reasonably sure about the correctness of the algorithm before it is coded. This process, at the algorithm level, is called "validation".

State the smoothness rule. (Nov/Dec 2012)
Definition:
f : N → R≥0 is eventually nondecreasing if there is an integer n0 such that f(n) ≤ f(n+1) for all n ≥ n0.
Definition:
Let b ≥ 2 be an integer. f is b-smooth if it is eventually nondecreasing and f(bn) ∈ O(f(n)); that is, there are n0 and c such that f(bn) ≤ c · f(n) for all n ≥ n0.
Example: If a, b > 0 and t(n) is defined by
t(n) = a                 if n = 1
t(n) = 4t(n/2) + bn      otherwise,
show that t(n) is Θ(n²).

What do you mean by algorithm? (May/June 2013)
An algorithm is a finite set of unambiguous instructions that, if followed, accomplishes a particular task. A program is the expression of an algorithm in a programming language; sometimes words such as procedure, function and subroutine are used synonymously with program.

Define time efficiency and space efficiency. (May/June 2012)
The time complexity of an algorithm is the amount of computer time it needs to run to completion.
The space complexity of an algorithm is the amount of memory it needs to run to completion.

What is called Substitution method? (May/June 2012)
Guess a bound and then use mathematical induction to prove our guess correct.

Forward substitution

Backward substitution
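As a small worked example, backward substitution applied to the recurrence T(n) = T(n-1) + 1 with T(1) = 1:

```latex
\begin{aligned}
T(n) &= T(n-1) + 1 \\
     &= T(n-2) + 2 \\
     &\;\;\vdots \\
     &= T(1) + (n-1) = n \in O(n)
\end{aligned}
```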

What is best case efficiency? (May/June 2012)
Best-case efficiency is the minimum number of steps that an algorithm can take on any input of a given size.
PART  B

Discuss briefly the sequence of steps in designing and analyzing an algorithm. (6) (Apr/May 2011)

Space complexity

Time complexity

Measuring input size

Measuring running time

Computing best, average and worst case

Computing order of growth of algorithms

Explain the necessary steps for analyzing the efficiency of recursive algorithm. Derive the recurrence relation for Tower of Hanoi problem. (10) (Apr/May 2011)
There are 3 pegs: 'from', 'using' and 'to'. Some disks of different sizes are given, which can slide onto any peg. Initially all of the disks are on the 'from' peg in order of size, with the largest disk at the bottom and the smallest at the top. We have to move all the disks from the 'from' peg to the 'to' peg; at the end, the 'to' peg will have the disks in the same order of size.
The rules are:
1) Only one disk can be moved from one peg to another at a time.
2) A disk can be placed only on top of a larger one.
3) Only the top disk of a peg can be moved.
The function is:
void towerOfHanoi(int n, String from, String using, String to)
{
    if (n > 0)
    {
        towerOfHanoi(n - 1, from, to, using);
        System.out.println("Move disk " + n + " from " + from + " to " + to);
        towerOfHanoi(n - 1, using, from, to);
    }
}
Let the time required for n disks be T(n). There are 2 recursive calls for n - 1 disks and one constant-time operation to move a disk from the 'from' peg to the 'to' peg; let its cost be k1.
Therefore,
T(n) = 2 T(n - 1) + k1
T(0) = k2, a constant.
T(1) = 2 k2 + k1
T(2) = 4 k2 + 2 k1 + k1
T(3) = 8 k2 + 4 k1 + 2 k1 + k1
In general the coefficient of k1 is 2^n - 1 and the coefficient of k2 is 2^n, so the time complexity is O(2^n).
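The recursive procedure above can be rendered as a runnable sketch (Python used here for illustration); counting the recorded moves confirms the 2^n - 1 total predicted by the recurrence:

```python
def tower_of_hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst via aux, recording each move."""
    if n > 0:
        tower_of_hanoi(n - 1, src, dst, aux, moves)   # n-1 disks: src -> aux
        moves.append((n, src, dst))                   # largest disk: src -> dst
        tower_of_hanoi(n - 1, aux, src, dst, moves)   # n-1 disks: aux -> dst

moves = []
tower_of_hanoi(3, "from", "using", "to", moves)
print(len(moves))  # 7 = 2^3 - 1
```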

Give the algorithm to compute factorial of n recursively. (May/June 2012)
Recursive Algorithm for Factorial Function
When it came to teaching recursion in programming languages in the 1980s and 1990s, the factorial function n! was the classic example used to explain the concept.
In a mathematical paper or book one might encounter the shorthand defined along the lines of
n! = ∏_{i=1}^{n} i
(with 0! = 1), while a computer programming book might say something like n! = 1 × 2 × … × (n - 1) × n (with the same assignment for zero factorial). Either of these suggests implementation with some kind of FOR loop.
The recurrence relation n! = (n - 1)! · n for n > 1, with 1! = 1, suggests a recursive implementation.
Call the recursive factorial algorithm with an integer N.
Test if N <= 0. If so, return 1.
If not, then call the recursive factorial algorithm with N - 1, multiply the result by N and return that value.
The algorithm calls itself and some mechanism is necessary for keeping track of the state of the computation. Here’s an implementation in the PASCAL programming language from Koffman’s 1981 book:
FUNCTION FACTORIAL (N: INTEGER): INTEGER;
(* RECURSIVE COMPUTATION OF N FACTORIAL *)
BEGIN
  (* TEST FOR STOPPING STATE *)
  IF N <= 0 THEN
    FACTORIAL := 1
  ELSE
    FACTORIAL := N * FACTORIAL(N - 1)
END; (* FACTORIAL *)
Depending on the implementation, the first time FACTORIAL(N) calls itself, the return address together with the argument N - 1 is pushed onto the stack. The next time N - 2 is pushed, and so on until 0 is reached. At that point the innermost instance returns 1; the instance for argument 1 multiplies it by 1 and returns, the instance for argument 2 multiplies the result by 2, and so on, until the first instance multiplies (N - 1)! by N and returns the final result.
The following table illustrates a sample run starting with N = 7:

Action                                     N    Return value
-----------------------------------------  ---  ------------
Call factorial function with value          7   undefined
Call factorial function with value          6   undefined
Call factorial function with value          5   undefined
Call factorial function with value          4   undefined
Call factorial function with value          3   undefined
Call factorial function with value          2   undefined
Call factorial function with value          1   undefined
Call factorial function with value          0   undefined
Return a 1                                  0   1
Multiply returned value by N and return     1   1
Multiply returned value by N and return     2   2
Multiply returned value by N and return     3   6
Multiply returned value by N and return     4   24
Multiply returned value by N and return     5   120
Multiply returned value by N and return     6   720
Multiply returned value by N and return     7   5040

This kind of recursion can exhaust memory (for stack space) well before any computations are performed. However, in this specific application, because factorials grow super-exponentially, the bound on integer capacity is usually far more restrictive than the memory capacity. For example, using 32-bit unsigned integers and estimating that each function call requires 16 bytes, the computation of 13! would require just 208 bytes of stack, but the result needs 33 bits and so overflows a 32-bit unsigned integer variable. Therefore input sizes should be limited to fit within the bounds of both memory and integer capacity.
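The 32-bit bound mentioned above is easy to confirm (Python's integers are arbitrary precision, so it serves only as a calculator here):

```python
import math

print(math.factorial(12))                # 479001600: still fits in 32 bits
print(math.factorial(13))                # 6227020800: needs 33 bits
print(math.factorial(13) > 2 ** 32 - 1)  # True: 13! overflows unsigned 32-bit
```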

Find time complexity and space complexity of the following problems. Factorial using recursion and compute nth Fibonacci number using iterative statements. (4+4) (Nov/Dec 2012)

Factorial (recursive): time O(n); space O(n) for the recursion stack.

Fibonacci (iterative): time O(n); space O(1). (A naive recursive Fibonacci would instead take O(φ^n) time, where φ ≈ 1.618 is the golden ratio.)
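A minimal iterative Fibonacci sketch (Python used for illustration); it runs in O(n) time and O(1) extra space:

```python
def fib(n):
    """Return the nth Fibonacci number (fib(0) = 0, fib(1) = 1) iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # constant extra space: only two running values
    return a

print(fib(10))  # 55
```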

What is the worst case and average case efficiency of binary search? (May/June 2012)
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in nondecreasing
// order, n > 0, determine whether x is present.
{
    low := 1;
    high := n;
    while (low <= high) do
    {
        mid := ⌊(low + high) / 2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0; // x not present
}
Worst-case and average-case time complexity: O(log n).
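A runnable version of the pseudocode above (Python, 0-based indexing, returning -1 rather than 0 for an absent key):

```python
def binary_search(a, x):
    """Return an index of x in sorted list a, or -1 if x is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1      # search the left half
        elif x > a[mid]:
            low = mid + 1       # search the right half
        else:
            return mid
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```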

Explain the asymptotic notations in detail. (Nov/Dec 2012) (16)
The asymptotic notation "Big Oh" (O):
The function f(n) = O(g(n)) iff there exist positive constants C and n0 such that f(n) ≤ C · g(n) for all n ≥ n0.
The asymptotic notation "Omega" (Ω):
The function f(n) = Ω(g(n)) iff there exist positive constants C and n0 such that f(n) ≥ C · g(n) for all n ≥ n0.
The asymptotic notation "Theta" (Θ):
The function f(n) = Θ(g(n)) iff there exist positive constants C1, C2 and n0 such that C1 · g(n) ≤ f(n) ≤ C2 · g(n) for all n ≥ n0.

Differentiate the mathematical analysis of recursive and nonrecursive algorithms. (Nov/Dec 2012) (16)
Nonrecursive Algorithms:

Steps in mathematical analysis of nonrecursive algorithms:

Decide on parameter n indicating input size

Identify algorithm’s basic operation

Determine worst, average, and best case for input of size n

Set up summation for C(n) reflecting algorithm’s loop structure

Simplify summation using standard formulas (see Appendix A)
Example:
Selection sort
Input: An array A[0..n1]
Output: Array A[0..n1] sorted in ascending order
for i ← 0 to n - 2 do
    min ← i
    for j ← i + 1 to n - 1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]
basic operation: comparison
Inner loop:
S(i) = Σ_{j=i+1}^{n-1} 1 = (n - 1) - (i + 1) + 1 = n - 1 - i
Outer loop:
Op = Σ_{i=0}^{n-2} S(i) = Σ_{i=0}^{n-2} (n - 1 - i) = Σ_{i=0}^{n-2} (n - 1) - Σ_{i=0}^{n-2} i
Basic formula:
Σ_{i=0}^{n} i = n(n + 1) / 2
Op = (n - 1)(n - 1) - (n - 2)(n - 1)/2 = (n - 1)[2(n - 1) - (n - 2)] / 2 = (n - 1) n / 2 = O(n²)
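The comparison count (n - 1)n/2 derived above can be checked directly by instrumenting the pseudocode (an illustrative Python sketch):

```python
def selection_sort(a):
    """Sort a in place; return the number of element comparisons made."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):              # i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):       # j = i+1 .. n-1
            comparisons += 1            # the basic operation
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return comparisons

a = [89, 45, 68, 90, 29, 34, 17]
print(selection_sort(a), a)  # 21 comparisons = 7*6/2, and a is sorted
```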
Recursive Algorithms:
Steps in mathematical analysis of recursive algorithms:

Decide on parameter n indicating input size

Identify algorithm’s basic operation

Determine worst, average, and best case for input of size n

Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation will be executed for an input of size n (alternatively, count recursive calls)

Solve the recurrence to obtain a closed form or estimate the order of magnitude of the solution (see Appendix B)
Important recurrence types:
Linear: One (constant) operation reduces problem size by one.
T(n) = T(n1) + c
T(1) = d
Solution: T(n) = (n1)c + d
Quadratic: A pass through input reduces problem size by one.
T(n) = T(n1) + cn
T(1) = d
Solution: T(n) = [n(n+1)/2 – 1] c + d
Logarithmic: One (constant) operation reduces problem size by half.
T(n) = T(n/2) + c
T(1) = d
Solution: T(n) = c log n + d
n log n: A pass through input reduces problem size by half.
T(n) = 2T(n/2) + cn
T(1) = d
Solution: T(n) = cn log n + d n
Example 1
n! = n * (n - 1)!
0! = 1
T(n) = T(n1) + 1
T(1) = 1
Telescoping:
T(n) = T(n1) + 1
T(n1) = T(n2) + 1
T(n2) = T(n3) + 1
…
T(2) = T(1 ) + 1
Add the equations and cross equal terms on opposite sides:
T(n) = T(1) + (n1) = n
Example 2: Binary search
T(n) = T(n/2) + 1
T(n/2) = T(n/4) + 1
…
T(2) = T(1) + 1
Add the equations and cross equal terms on opposite sides:
T(n) = T(1) + log(n) = O(log(n))
Master Theorem: A general divideandconquer recurrence
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k):
a < b^k : T(n) ∈ Θ(n^k)
a = b^k : T(n) ∈ Θ(n^k log n)
a > b^k : T(n) ∈ Θ(n^(log_b a))
Note: the same results hold with O instead of Θ.
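As a worked example, take the recurrence from the smoothness-rule question above, T(n) = 4T(n/2) + cn (renaming its constant to c to avoid clashing with the splitting factor b). Here a = 4, b = 2, and f(n) ∈ Θ(n^1), so k = 1 and a > b^k:

```latex
T(n) \in \Theta\left(n^{\log_b a}\right)
      = \Theta\left(n^{\log_2 4}\right)
      = \Theta(n^2)
```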

Briefly explain the time complexity, space complexity estimation. (May/June 2013) (6)
Time and Space Complexity:
Goals:
This laboratory exercise introduces some principles of algorithm effectiveness, including the amount of time and memory required for the algorithm. BigO notation is introduced to provide an informal measure of the time or space required by an algorithm. These ideas are applied to the linear and binary search algorithms, discussed in the lab on searching.
In considering the solution to a problem, it is natural to ask how effective that solution might be. Also, when comparing solutions, one might wonder if one solution were better than another. Altogether, one might use many criteria to evaluate such solutions, including:
Accuracy:
Of course, any program should produce correct answers. (If we were satisfied with wrong results, it is trivial to produce many such answers very quickly.) However, it may not be immediately clear just how accurate results should be in a specific instance. For example, one algorithm may be simple to program and may run quickly, but it only may be accurate to 5 decimal places. A second algorithm may be more complex and much slower, but may give 15 place accuracy. If 5place accuracy is adequate for a specific application, the first algorithm is the better choice. However, if 10 or 12 place accuracy is required, the slower algorithm must be used.
Efficiency:
Efficiency can be measured in many ways: programmer time, algorithm execution time, memory used, and so on. If a program is to be used once, then programmer time may be a major consideration, and a simple algorithm might be preferred. If a program is to be used many times, however, then it may be worth spending more development time with a complex algorithm, so the procedure will run very quickly.
Use of Memory:
One algorithm may require more computer memory in which to execute. If space is a scarce resource, then the amount of space an algorithm requires should be taken into consideration when comparing algorithms.
Ease of Modification:
It is common practice to modify old programs to solve new problems. A very obscure algorithm that is difficult to understand, therefore. is usually less desirable than one which can be easily read and modified.
For this laboratory exercise, we focus on algorithm execution time.

Write the linear search algorithm and analyze its time complexity. (May/June 2013) (10)
For successful search
Best case O(1)
Worst Case O(n)
Average case O(n)
For Unsuccessful search
Best case O(n)
Worst Case O(n)
Average case O(n)
Solve the following recurrence relations (Nov/Dec 2012) (4+4)
T(n)=
T(n)=
Where a and c are constants
O(logn)
O(nlogn)

Distinguish between Big Oh, Theta and Omega notations (8) (Nov/Dec 2012)
The difference between Big Oh, Big Omega and Big Theta.
This is written not from the mathematical point of view, but the information technology point of view, so there won’t be any mathematical things in this article. I also will not handle complexity in greater detail than necessary.
Algorithm complexity studies the limiting behaviour of algorithms as the input size n approaches infinity. There are three main complexity classes in which algorithms can be placed: Oh, Omega and Theta. Two of them, Oh and Omega, can be divided into subclasses: Big-Oh and Little-Oh, and Big-Omega and Little-Omega.
This article describes an easy but useful mnemonic that can be used to differentiate between the three main classes.
Big-Oh and Little-Oh:
This one helps if you know that the letter is not actually a capital 'O', but the Greek capital Omicron, which looks deceptively like the capital 'O'. The mnemonic lies in the name of the Greek letter.
To access the mnemonic, Omicron should be read as O-micron. Extended to a sentence, you get: '… is micro compared to … as n approaches infinity'.
Let's take, for example, the following notation: f(n) ∈ O(g(n)).
Using the mnemonic, this can be read: "The function f is micro compared to g as n approaches infinity." This means that, as n approaches infinity, f is asymptotically bounded above by g (up to a constant factor).
Or: f(n) ≤ g(n) · k as n approaches infinity.
Be mindful that this means that f(n) could very well equal g(n) up to some constant factor as n approaches infinity.
If the two should not be equal, use the tighter o (Little-Oh). I use the same mnemonic for this one as for Big-Oh, but as the hole in the letter is tighter, it reminds me that o describes a strictly smaller bound.
This corresponds with the definition of f(n) ∈ o(g(n)): g(n) asymptotically dominates f(n), or, more mathematically: f(n) < g(n) · k.
Big-Omega and Little-Omega:
This mnemonic is a bit easier since the Greek letter is already used. To access it, read the Omega as O-mega. Extended to a sentence form, you can read '… is mega compared to … as n approaches infinity.'
Let's take, for example, the following notation:
f(n) ∈ Ω(g(n))
Using the mnemonic, this can be read: "The function f is mega compared to g as n approaches infinity." This means that, as n approaches infinity, f is asymptotically bounded below by g (up to a constant factor).
Or: f(n) ≥ g(n) · k as n approaches infinity.
Be mindful that this means that f(n) could very well equal g(n) up to some constant factor as n approaches infinity.
If the two should not be equal, use the tighter ω (Little-Omega). I use the same mnemonic for this one as for Big-Omega, but as the letter is smaller, it reminds me that ω describes a strictly greater bound than its Ω counterpart.
This corresponds with the definition of f(n) ∈ ω(g(n)): f(n) asymptotically dominates g(n), or, more mathematically: f(n) > g(n) · k.
Big-Theta:
There's no real mnemonic for this one, but when f(n) grows just as fast as g(n) asymptotically, then f(n) ∈ Θ(g(n)). Should the other two mnemonics fail, then the function you're trying to evaluate is most likely in this class.
Informally, use this when the asymptotic growth of two functions is equal.
As there’s no such thing as LittleTheta, there’s no confusion possible on which to use.

Analyse the best case, average case and worst case for linear search. (8) (Nov/Dec 2012)
We can have three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case
Let us consider the following implementation of Linear Search.

#include <stdio.h>

// Linearly search x in arr[]. If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (arr[i] == x)
            return i;
    }
    return -1;
}
/* Driver program to test above functions*/
int main()
{
int arr[] = {1, 10, 30, 15};
int x = 30;
int n = sizeof(arr)/sizeof(arr[0]);
printf("%d is present at index %d", x, search(arr, n, x));
getchar();
return 0;
}

Worst Case Analysis (Usually Done)
In worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched (x in the above code) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search would be O(n).
Average Case Analysis (Sometimes done)
In average-case analysis, we take all possible inputs, calculate the computing time for each, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum the cost of all the cases and divide the sum by (n + 1). Following is the value of the average-case time complexity:
Average Case Time = (Σ_{i=1}^{n+1} θ(i)) / (n + 1)
                  = θ((n + 1)(n + 2) / 2) / (n + 1)
                  = θ(n)
Best Case Analysis (Bogus)
In best-case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the best-case time complexity would be Θ(1).
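For a successful search under the uniform-distribution assumption above, the average number of comparisons can be computed exactly by enumerating all n target positions; it comes to (n + 1)/2 (an illustrative Python sketch):

```python
def comparisons_to_find(arr, x):
    """Number of comparisons linear search makes before finding x in arr."""
    for i, v in enumerate(arr):
        if v == x:
            return i + 1
    return len(arr)

n = 101
arr = list(range(n))
# Average over every possible (equally likely) successful target position
avg = sum(comparisons_to_find(arr, x) for x in arr) / n
print(avg, (n + 1) / 2)  # both 51.0
```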
Unit II  BRUTE FORCE AND DIVIDE-AND-CONQUER
PART  A

What is the time complexity of binary search? (May/June 2012)
Compare x with the middle name in the book. If x comes before the middle name, recursively look for x in the first half; otherwise recursively look for x in the second half. The search is finished when the beginning and end of the list are within one.
Best case Performance – O(1)
Average Case Performance – O(log n)
Worst Case Performance – O(log n)

List down the problems that can be solved using divide and conquer approach. (Nov/Dec 2012)
Binary search, merge sort, quick sort, finding the maximum and minimum of a list, and Strassen's matrix multiplication.

What do you mean by Divide and Conquer strategy? (May/June 2013)
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the inputs into 'k' distinct subsets, 1 < k ≤ n, yielding k subproblems. These subproblems must be solved, and then a method must be found to combine the subsolutions into a solution of the whole.

Write control abstraction for the ordering paradigm. (May/June 2013)
The general rule is that program i is stored on tape T_(i mod m). On any given tape the programs are stored in nondecreasing order of their lengths.
Greedy-method control abstraction for the ordering paradigm:
Algorithm Store(n, m)
{
    j := 0;
    for i := 1 to n do
    {
        write("append program", i, "to permutation for tape", j);
        j := (j + 1) mod m;
    }
}
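The control abstraction above can be rendered as a short Python sketch (programs are assumed to be pre-sorted by nondecreasing length, as the rule requires):

```python
def store(n, m):
    """Assign programs 1..n to tapes 0..m-1 in round-robin order."""
    tapes = [[] for _ in range(m)]
    j = 0
    for i in range(1, n + 1):
        tapes[j].append(i)      # append program i to the permutation for tape j
        j = (j + 1) % m
    return tapes

print(store(7, 3))  # [[1, 4, 7], [2, 5], [3, 6]]
```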
PART  B

Explain in detail how the divide and conquer technique can be applied to binary tree traversals. (Nov/Dec 2012) (16)

Distinguish between Quick sort and merge sort and arrange the following numbers in increasing order using merge sort (18, 29, 68, 32, 43, 37, 87, 24, 47, 50) (May/June 2013) (16)

Trace the steps in merge sort algorithm for the elements 122, 25, 70, 175, 89, 90, 95, 102, 123 and also compute its time complexity (16) (Nov/Dec 2012)

Explain the binary search algorithm with example. And explain its best, worst and average case time complexities. (8) (Nov/Dec 2012)

Explain how greedy method can be applied to solve the Knapsack problem.(16) (Nov/Dec 2012)

Discuss the algorithm for finding a minimum cost binary search trees. Explain with suitable example. (Nov/Dec 2012) (8)

Find the optimal and all feasible solution for the knapsack problem. The sack maximum capacity is 100. The item weights and corresponding profits are W=(10,20,30,40,50) and P=(20,30,66,40,60). Fill the sack such that the sack capacity should not exceed the maximum capacity and objective of the problem is to be maximize the profit. (8) (Nov/Dec 2012)

Define Greedy Algorithm and find an optimal solution to the knapsack instance n=7, m=15. (p1, p2, p3,…..p7) = (10, 5, 15, 7, 6, 18, 3) and (w1,w2,w3,….w7) = (2, 3, 5, 7, 1, 4, 1) (May/June 2013) (16)
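A greedy sketch for this instance (the fractional knapsack is assumed, as in the classic greedy treatment; items are taken in decreasing order of profit/weight ratio, and the textbook optimum for this data is 55⅓):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: fill by decreasing profit/weight ratio."""
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if capacity >= w:
            total += p           # take the whole item
            capacity -= w
        else:
            total += p * capacity / w   # take a fraction and stop
            break
    return total

p = [10, 5, 15, 7, 6, 18, 3]
w = [2, 3, 5, 7, 1, 4, 1]
print(fractional_knapsack(p, w, 15))  # 55.333...
```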
Unit III  DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE
PART – A

What is principle of optimality? (Nov/Dec 2012)

Define multistage graphs. (May/June 2013)

What is the drawback of greedy algorithm? (May/June 2012)

What is knapsack problem? (May/June 2013)

What is dynamic programming? (May/June 2012)

State principle of optimality. (May/June 2012)

Differentiate greedy method and dynamic programming. (May/June 2012)
PART  B

Apply backtracking to solve the following instance of subset sum problem. S = {1, 3, 4, 5}; d = {11} (Nov/Dec 2012) (16)

Explain the multistage graph problem with an example. (May/June 2013) (8)

Explain multistage graph. Find the minimum cost path from source to destination for the given graph using both forward and backward approaches.

Describe all pairs shortest path problem and write procedure to compute lengths of shortest paths. (May/June 2013) (16)

With a suitable example, explain all – pair shortest paths problem.(16) (May/June 2012)
Unit V  COPING WITH THE LIMITATIONS OF ALGORITHM
PART – A

Compare backtracking and branch and bound techniques. (Nov/Dec 2012)

What is Hamiltonian cycle? (May/June 2013)

Define sum of subset problem. (May/June 2013)

Define promising and non promising nodes in a state space tree. (May/June 2012)

State Knapsack problem. (May/June 2012)

What is the difference between explicit and implicit constraints? (May/June 2012)

Define the basic principles of back tracking. (May/June 2012)

Explain the control abstraction for Backtracking method. How 8 Queens problem could be solved using backtracking method? Explain. (Nov/Dec 2012) (8)

An NPhard problem can be solved in deterministic polynomial time, how? (May/June 2013)

What is an articulation point in a graph? (May/June 2012)

What is the difference between BFS and DFS methods? (May/June 2012)
PART  B

Differentiate depth first search and breadth first search. (Nov/Dec 2012)

For the following graph identify and explain the articulation points and draw the biconnected components (May/June 2013) (16)

Write a complete LC branch and bound algorithm for the job sequencing with deadlines problem. Use the fixed tuple size formulation. (May/June 2013) (16)

How backtracking works on the 8 Queens problem with suitable example? (May/June 2013) (16)

Write a backtracking program for solving the knapsack optimization problem. (May/June 2013) (8)

Explain elaborately recursive backtracking algorithm. (May/June 2013) (8)

Explain Hamilton Cycles. (Nov/Dec 2012) (8)

Discuss the following with example (May/June 2012)

n queen’s problem (8)

Hamiltonian circuit problem (8)

Write down and explain the procedure for tackling the 8 – queens problem using a backtracking approach. (16) (May/June 2012)

Let w = {5,7,10,12,15,18,20} and m=35. Find all possible subset of w whose sum is equivalent to m. Draw the portion of state space tree for this problem. (Nov/Dec 2012) (8)
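A backtracking sketch for this instance (Python used for illustration); it enumerates the subsets of w that sum exactly to m = 35:

```python
def subset_sums(w, m):
    """Backtracking: return all subsets of w (as tuples) summing exactly to m."""
    solutions = []

    def backtrack(i, chosen, total):
        if total == m:                      # all weights positive: record and stop
            solutions.append(tuple(chosen))
            return
        if i == len(w) or total > m:        # prune: overshoot or no items left
            return
        chosen.append(w[i])                 # include w[i]
        backtrack(i + 1, chosen, total + w[i])
        chosen.pop()                        # exclude w[i]
        backtrack(i + 1, chosen, total)

    backtrack(0, [], 0)
    return solutions

print(subset_sums([5, 7, 10, 12, 15, 18, 20], 35))
```

For this data the search finds four solutions: {5, 10, 20}, {5, 12, 18}, {7, 10, 18} and {15, 20}.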

Discuss in detail about the biconnected components of a graph. (16) (May/June 2012)

Compare and contrast FIFO and LC branch – and – bound search techniques. (16) (May/June 2012)

Give an algorithm to identify articulation points and to construct biconnected components Explain with an example. (Nov/Dec 2012) (8)

Compare and contrast LCBB and FIFO BB. (Nov/Dec 2012) (8)

Let n=4 and m=15. The profits for the instance are (p1, p2, p3, p4) = (10, 10, 12, 18) and the weights are (w1, w2, w3, w4) = (2, 4, 6, 9). Explain the working of the 0/1 Knapsack problem using the LC branch and bound technique. (Nov/Dec 2012) (8)

Solve the travelling salesperson problem for the following graph using branchandbound algorithm. (Nov/Dec 2012) (16)
