University questions



The function f(n) = Θ(g(n)) iff there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
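For example, f(n) = 3n + 2 is Θ(n): take c1 = 3, c2 = 4, and n0 = 2, since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2.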




  1. Differentiate the mathematical analysis of recursive and non-recursive algorithms. (Nov/Dec 2012) (16)

Nonrecursive Algorithms:

Steps in mathematical analysis of nonrecursive algorithms:

  • Decide on parameter n indicating input size

  • Identify algorithm’s basic operation

  • Determine worst, average, and best case for input of size n

  • Set up summation for C(n) reflecting algorithm’s loop structure

  • Simplify summation using standard formulas (see Appendix A)

Example: Selection sort

Input: An array A[0..n-1]
Output: Array A[0..n-1] sorted in ascending order

for i ← 0 to n-2 do
    min ← i
    for j ← i+1 to n-1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]

Basic operation: the comparison A[j] < A[min]

Inner loop:

S(i) = Σ_{j=i+1}^{n-1} 1 = (n-1) - (i+1) + 1 = n - 1 - i
Outer loop:

Op = Σ_{i=0}^{n-2} S(i) = Σ_{i=0}^{n-2} (n-1-i) = Σ_{i=0}^{n-2} (n-1) - Σ_{i=0}^{n-2} i

Basic formula:

Σ_{i=0}^{n} i = n(n+1)/2

Op = (n-1)(n-1) - (n-2)(n-1)/2 = (n-1)[2(n-1) - (n-2)]/2 = n(n-1)/2 ∈ Θ(n²)
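To make the count concrete, here is a minimal C sketch of the algorithm above with an added comparison counter (the counter and the sample array are illustrative, not part of the pseudocode); for n = 5 it reports 10 comparisons, matching n(n-1)/2.

#include <stdio.h>

// Selection sort with a counter for the basic operation A[j] < A[min].
int selection_sort(int a[], int n)
{
    int comparisons = 0;                      // illustrative counter
    for (int i = 0; i <= n - 2; i++) {
        int min = i;
        for (int j = i + 1; j <= n - 1; j++) {
            comparisons++;                    // one basic operation
            if (a[j] < a[min])
                min = j;
        }
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;   // swap A[i] and A[min]
    }
    return comparisons;
}

int main(void)
{
    int a[] = {5, 3, 1, 4, 2};
    int n = sizeof(a) / sizeof(a[0]);
    printf("comparisons = %d, n(n-1)/2 = %d\n", selection_sort(a, n), n * (n - 1) / 2);
    return 0;
}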

Recursive Algorithms:

Steps in mathematical analysis of recursive algorithms:


  • Decide on parameter n indicating input size

  • Identify algorithm’s basic operation

  • Determine worst, average, and best case for input of size n

  • Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation is executed for an input of size n (alternatively, count recursive calls)

  • Solve the recurrence to obtain a closed form or estimate the order of magnitude of the solution (see Appendix B)

Important recurrence types:

Linear: One (constant) operation reduces problem size by one.

T(n) = T(n-1) + c


T(1) = d
Solution: T(n) = (n-1)c + d

Quadratic: A pass through input reduces problem size by one.

T(n) = T(n-1) + cn
T(1) = d
Solution: T(n) = [n(n+1)/2 – 1] c + d

Logarithmic: One (constant) operation reduces problem size by half.

T(n) = T(n/2) + c
T(1) = d
Solution: T(n) = c log n + d

n log n: A pass through input reduces problem size by half.

T(n) = 2T(n/2) + cn
T(1) = d
Solution: T(n) = cn log n + d n
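As a check of the quadratic type above, telescoping confirms the stated solution:

T(n) = T(n-1) + cn = T(n-2) + c(n-1) + cn = … = T(1) + c(2 + 3 + … + n) = d + c[n(n+1)/2 - 1]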
Example 1

n! = n*(n-1)!


0! = 1

T(n) = T(n-1) + 1


T(1) = 1

Telescoping:

T(n) = T(n-1) + 1
T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1
…
T(2) = T(1) + 1

Add the equations and cancel the equal terms on opposite sides:

T(n) = T(1) + (n-1) = n
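A minimal C sketch of this recursion with an added call counter (the counter is illustrative) confirms T(n) = n:

#include <stdio.h>

static int calls = 0;        // counts recursive activations, i.e. T(n)

long fact(int n)
{
    calls++;
    if (n <= 1)              // T(1) = 1
        return 1;
    return n * fact(n - 1);  // T(n) = T(n-1) + 1
}

int main(void)
{
    printf("5! = %ld, T(5) = %d\n", fact(5), calls);  // prints 5! = 120, T(5) = 5
    return 0;
}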
Example 2: Binary search

T(n) = T(n/2) + 1
T(n/2) = T(n/4) + 1
…
T(2) = T(1) + 1

Add the equations and cancel the equal terms on opposite sides:

T(n) = T(1) + log2 n = O(log n)
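A minimal C sketch of binary search (iterative form; the sample array is illustrative) shows the halving that the telescoping captures: each iteration discards half of the remaining range, so the loop runs at most about log2 n + 1 times.

#include <stdio.h>

// Iterative binary search on a sorted array; returns the index of x or -1.
int binary_search(const int a[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {                  // at most about log2(n) + 1 iterations
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == x)
            return mid;
        if (a[mid] < x)
            lo = mid + 1;               // discard the left half
        else
            hi = mid - 1;               // discard the right half
    }
    return -1;
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56};
    printf("%d\n", binary_search(a, 8, 23));   // prints 5
    return 0;
}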

Master Theorem: A general divide-and-conquer recurrence

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k)

a < b^k : T(n) ∈ Θ(n^k)

a = b^k : T(n) ∈ Θ(n^k log n)

a > b^k : T(n) ∈ Θ(n^(log_b a))

Note: the same results hold with O instead of Θ.
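For example, merge sort's recurrence T(n) = 2T(n/2) + cn has a = 2, b = 2, k = 1, so a = b^k and T(n) ∈ Θ(n log n); binary search's recurrence T(n) = T(n/2) + c has a = 1, b = 2, k = 0, so again a = b^k and T(n) ∈ Θ(log n).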


  2. Briefly explain time complexity and space complexity estimation. (May/June 2013) (6)

Time and Space Complexity:

Goals:


This laboratory exercise introduces some principles of algorithm effectiveness, including the amount of time and memory required for the algorithm. Big-O notation is introduced to provide an informal measure of the time or space required by an algorithm. These ideas are applied to the linear and binary search algorithms, discussed in the lab on searching.

In considering the solution to a problem, it is natural to ask how effective that solution might be. When comparing solutions, one might also ask whether one solution is better than another. Many criteria can be used to evaluate solutions, including:

Accuracy:

Of course, any program should produce correct answers. (If we were satisfied with wrong results, it would be trivial to produce many such answers very quickly.) However, it may not be immediately clear just how accurate results should be in a specific instance. For example, one algorithm may be simple to program and may run quickly, but may be accurate to only 5 decimal places. A second algorithm may be more complex and much slower, but may give 15-place accuracy. If 5-place accuracy is adequate for a specific application, the first algorithm is the better choice; however, if 10- or 12-place accuracy is required, the slower algorithm must be used.


Efficiency:

Efficiency can be measured in many ways: programmer time, algorithm execution time, memory used, and so on. If a program is to be used once, then programmer time may be a major consideration, and a simple algorithm might be preferred. If a program is to be used many times, however, then it may be worth spending more development time with a complex algorithm, so the procedure will run very quickly.


Use of Memory:

One algorithm may require more computer memory in which to execute. If space is a scarce resource, then the amount of space an algorithm requires should be taken into consideration when comparing algorithms.


Ease of Modification:

It is common practice to modify old programs to solve new problems. A very obscure algorithm that is difficult to understand, therefore, is usually less desirable than one that can be easily read and modified.



For this laboratory exercise, we focus on algorithm execution time.
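As a rough illustration, execution time can also be estimated empirically; the following is a minimal C sketch (the workload function and input size are placeholders) using the standard clock() routine:

#include <stdio.h>
#include <time.h>

// Placeholder workload whose running time we want to measure.
long work(long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i;
    return sum;
}

int main(void)
{
    clock_t start = clock();
    long result = work(100000000L);
    clock_t end = clock();
    printf("result = %ld, elapsed = %.3f s\n",
           result, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}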


  3. Write the linear search algorithm and analyze its time complexity. (May/June 2013) (10)
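A sketch of the algorithm in the same pseudocode style as the selection sort earlier (the standard sequential-search formulation; the key comparison is the basic operation):

Input: An array A[0..n-1] and a search key K
Output: The index of the first element equal to K, or -1

i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1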

For a successful search:

Best case: O(1)

Worst case: O(n)

Average case: O(n)

For an unsuccessful search:

Best case: O(n)

Worst case: O(n)

Average case: O(n)

  4. Solve the following recurrence relations, where a and c are constants. (Nov/Dec 2012) (4+4)

(i) T(n) = T(n/2) + c, T(1) = a

This is the logarithmic type listed earlier: telescoping gives T(n) = c log2 n + a = O(log n).

(ii) T(n) = 2T(n/2) + cn, T(1) = a

This is the n log n type listed earlier: T(n) = cn log2 n + an = O(n log n).


  5. Distinguish between Big Oh, Theta and Omega notations. (8) (Nov/Dec 2012)

The difference between Big Oh, Big Omega and Big Theta.

This is written from an information-technology rather than a strictly mathematical point of view, so heavy formalism is avoided and complexity is not treated in more detail than necessary.

Algorithm complexity studies the limiting behaviour of algorithms as the input size n approaches infinity. There are three main complexity classes in which algorithms can be placed: Oh, Omega, and Theta. Two of them, Oh and Omega, can be divided into subclasses: Big-Oh and Little-Oh, and Big-Omega and Little-Omega.

This article describes an easy but useful mnemonic that can be used to differentiate between the three main classes.
Big-Oh and Little-Oh:

This one helps if you know that the letter is not actually a capital ‘o’ but the Greek capital Omicron, which looks deceptively like the capital ‘o’: O. The mnemonic lies in the name of the Greek letter.

To access the mnemonic, Omicron should be read as O-micron. Extended to a sentence, you get: ‘… is micro compared to … as n approaches infinity’.

Let’s take, for example, the following notation: f(n) ∈ O(g(n))

Using the mnemonic, this can be read: “The function f is micro compared to g as n approaches infinity.” This means that, as n approaches infinity, f is asymptotically bounded above by g (up to a constant factor).
Or: f(n) ≤ k·g(n) as n approaches infinity, for some constant k > 0.

Be mindful that this means that f(n) could very well equal g(n) up to a constant factor as n approaches infinity.

If the two should not be equal, use the stricter o (Little-Oh). I use the same mnemonic for this one as for Big-Oh, but as the hole in the letter is tighter, it reminds me that o describes a strict bound, strictly smaller than its O bound.
This corresponds with the definition of f(n) ∈ o(g(n)): g(n) asymptotically dominates f(n), or, more mathematically: f(n) < k·g(n) for every constant k > 0 (for sufficiently large n).
Big-Omega and Little-Omega:

This mnemonic is a bit easier since the Greek letter Omega already appears in the notation. To access it, read the Omega as O-mega. Extended to a sentence form, you can read ‘… is mega compared to … as n approaches infinity.’

Let’s take, for example, the following notation:

f(n) ∈ Ω(g(n))


Using the mnemonic, this can be read: “The function f is mega compared to g as n approaches infinity.” This means that, as n approaches infinity, f is asymptotically bounded below by g (up to a constant factor).
Or: f(n) ≥ k·g(n) as n approaches infinity, for some constant k > 0.

Be mindful that this means that f(n) could very well equal g(n) up to a constant factor as n approaches infinity.

If the two should not be equal, use the stricter ω (Little-Omega). I use the same mnemonic for this one as for Big-Omega, but as the hole in the letter is smaller, it reminds me that ω describes a strict bound, strictly stronger than its Ω bound.

This corresponds with the definition of f(n) ∈ ω(g(n)): f(n) asymptotically dominates g(n), or, more mathematically: f(n) > k·g(n) for every constant k > 0 (for sufficiently large n).


Big-Theta:

There’s no real mnemonic for this one, but when f(n) grows just as fast as g(n) asymptotically, then f(n) ∈ Θ(g(n)). Should the other two mnemonics fail, then the function you’re trying to evaluate is most likely in this class.


Informally, use this when the asymptotic growth of two functions is equal.

As there’s no such thing as Little-Theta, there’s no confusion possible on which to use.
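For example, for f(n) = 3n² + 5n: f(n) ∈ O(n²) and f(n) ∈ Ω(n²), hence f(n) ∈ Θ(n²); it is also in o(n³) and ω(n), but not in o(n²) or ω(n²).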




  6. Analyse the best case, average case and worst case for linear search. (8) (Nov/Dec 2012)

We can have three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case

Let us consider the following implementation of Linear Search.



#include <stdio.h>

// Linearly search x in arr[]. If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test the above function */
int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}
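For the code above: the best case occurs when x is at arr[0] (one comparison, O(1)); the worst case occurs when x is at the last position or not present at all (n comparisons, O(n)); and, assuming each position is equally likely for a successful search, the average is (1 + 2 + … + n)/n = (n + 1)/2 comparisons, i.e. O(n).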




