An algorithm is a clearly specified set of simple instructions to be followed to solve a problem.
2.1 Mathematical Background
Definitions:
T(N) = O(f(N)) (Big-Oh notation) if there are positive constants c and n0 such that T(N) ≤ cf(N) when N ≥ n0.
T(N) = Ω(g(N)) (Big-Omega notation) if there are positive constants c and n0 such that T(N) ≥ cg(N) when N ≥ n0.
T(N) = Θ(b(N)) (Big-Theta notation) if and only if T(N) = O(b(N)) and T(N) = Ω(b(N)).
T(N) = o(p(N)) (little-oh notation) if T(N) = O(p(N)) and T(N) ≠ Θ(p(N)).
If T(N) = O(f(N)), then f(N) is an upper bound on T(N) and, equivalently, T(N) is a lower bound on f(N).
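Example: 1000N = O(N^2), since with c = 1 and n0 = 1000 we have 1000N ≤ N^2 whenever N ≥ 1000. But 1000N ≠ Θ(N^2), because N^2 is not O(1000N).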
Rule 1
if T1(N) = O(f(N)) and T2(N) = O(g(N)), then
a. T1(N) + T2(N) = max(O(f(N)), O(g(N))),
b. T1(N) * T2(N) = O(f(N) * g(N)).
Rule 2
If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).
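Example: T(N) = 3N^3 + 5N^2 + 7 is Θ(N^3); the lower-order terms and the leading constant are absorbed into c and n0.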
Rule 3
log^kN = O(N) for any constant k. This tells us that logarithms grow very slowly. (Note: log^kN means (logN)^k, the logarithm raised to the k-th power; it is not log(N^k) = klogN, which would trivially be O(logN).)
Big-Oh notation: http://en.wikipedia.org/wiki/Big_O_notation
We can always determine the relative growth rates of two functions by computing the limit of f(N)/g(N), using L'Hôpital's rule: http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule
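Example: to compare N logN with N^1.5 it suffices to compare logN with N^0.5. By L'Hôpital's rule, lim (logN)/(N^0.5) = lim (1/N)/(0.5N^(-0.5)) = lim 2/(N^0.5) = 0, so N logN = o(N^1.5).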
Typically, the size of the input is the main consideration. We define two functions, Tavg(N) and Tworst(N), as the average-case and worst-case running times. Generally, the quantity required is the worst-case time.
2.4 Running Time Calculations
General Rules
Rule 1 -- for Loops:
The running time of a for loop is at most the running time of the statements inside the for loop times the number of iterations.
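For example, the text's sum-of-cubes routine, sketched here in C (the function and variable names are mine):

long sum_of_cubes( int n )
{
    long partial_sum = 0;
    /* O(1) work per iteration, n iterations: O(N) overall */
    for( int i = 1; i <= n; i++ )
        partial_sum += (long) i * i * i;
    return partial_sum;
}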
Rule 2 -- nested for loops:
Analyze these inside out. The total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the for loops.
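A quick C sketch, where the innermost statement k++ is the unit of work:

int k = 0;
for( int i = 0; i < n; i++ )
    for( int j = 0; j < n; j++ )
        k++;    /* executes n * n times, so this fragment is O(N^2) */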
Rule 3 -- consecutive statements
These just add (which means that the maximum is the one that counts), as the sketch below shows.
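A C sketch of this rule, assuming an int array a of length n:

for( int i = 0; i < n; i++ )
    a[i] = 0;                      /* O(N) */
for( int i = 0; i < n; i++ )
    for( int j = 0; j < n; j++ )
        a[i] += a[j] + i + j;      /* O(N^2), which dominates: the whole fragment is O(N^2) */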
Rule 4 -- if/else
For the following fragment:
if( Condition )
    s1
else
    s2
The running time of an if/else is never more than the running time of the test plus the largest of the running times of s1 and s2.
Beware the abuse of recursion (p. 23): a recursive routine that recomputes the same subproblems, such as the naive recursive Fibonacci function, can take exponential time.
2.4.4 Logarithms in the Running Time
An algorithm is O(logN) if it takes constant (O(1)) time to cut the problem size by a fraction (which is usually 1/2).
If constant time is required to merely reduce the problem by a constant amount, then the algorithm is O(N).
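The canonical O(logN) example is binary search; here is a C sketch, assuming a sorted array a of n ints:

int binary_search( const int a[], int n, int x )
{
    int low = 0, high = n - 1;
    /* each iteration halves the range [low, high], so the loop runs O(logN) times */
    while( low <= high )
    {
        int mid = low + ( high - low ) / 2;   /* avoids overflow of low + high */
        if( a[mid] < x )
            low = mid + 1;
        else if( a[mid] > x )
            high = mid - 1;
        else
            return mid;    /* found */
    }
    return -1;    /* not found */
}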
One way to verify that some program is O(f(N)) is to compute the values T(N)/f(N) for a range of N, where T(N) is the empirically observed running time. If f(N) is a tight answer for the running time, the computed values converge to a positive constant. If f(N) is an overestimate, the values converge to zero. If f(N) is an underestimate (and hence wrong), the values diverge.
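A self-contained C sketch of this empirical check; the quadratic work() routine and the doubling schedule for N are just illustrative choices:

#include <stdio.h>
#include <time.h>

/* the routine under test -- here a deliberately quadratic loop */
static long work( int n )
{
    long k = 0;
    for( int i = 0; i < n; i++ )
        for( int j = 0; j < n; j++ )
            k++;
    return k;
}

int main( void )
{
    /* print T(N)/N^2 for doubling N: convergence to a positive constant
       suggests N^2 is tight, convergence to zero suggests an overestimate,
       and divergence suggests an underestimate */
    for( int n = 1000; n <= 16000; n *= 2 )
    {
        clock_t start = clock();
        volatile long sink = work( n );      /* volatile keeps the loop from being optimized away */
        double t = (double)( clock() - start ) / CLOCKS_PER_SEC;
        printf( "N = %6d   T(N)/N^2 = %.3e\n", n, t / ( (double) n * n ) );
        (void) sink;
    }
    return 0;
}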