Asymptotic Notations


Asymptotic notations are a language that allows us to analyze an algorithm's running time by describing its behavior as the input size increases. This is also known as the algorithm's growth rate.

Let us imagine an algorithm as a function f, with n as the input size and f(n) as the running time. So for a given algorithm f with input size n, you get some resulting running time f(n). Plotting this gives a graph where the y-axis is the running time, the x-axis is the input size, and each plotted point is the running time for a given input size.
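
To make this concrete, here is a minimal Python sketch (the function linear_work is a made-up example, not part of this lesson) that samples the running time f(n) of a simple linear-time algorithm at a few input sizes; the (n, time) pairs it prints are exactly the plot points described above.

import time

def linear_work(n):
    # A toy algorithm whose running time grows roughly linearly with n.
    total = 0
    for i in range(n):
        total += i
    return total

# Sample f(n) at increasing input sizes n (x-axis = n, y-axis = seconds).
for n in [10_000, 100_000, 1_000_000]:
    start = time.perf_counter()
    linear_work(n)
    print(f"n = {n:>9}: f(n) ~ {time.perf_counter() - start:.6f} s")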

You can label a function, or algorithm, with an asymptotic notation in many different ways. For example, you can describe an algorithm by its best case, worst case, or average case. The most common is to analyze an algorithm by its worst case. You typically don't evaluate the best case, because those conditions aren't what you're planning for.

Types of functions, limits, and simplification

1 Logarithmic Function - log n
2 Linear Function - an + b
3 Quadratic Function - an^2 + bn + c
4 Polynomial Function - a_z*n^z + . . . + a_2*n^2 + a_1*n + a_0, where z is some constant
5 Exponential Function - a^n, where a is some constant

These are some basic function growth classifications used in various notations. The list starts at the slowest-growing function (logarithmic, fastest execution time) and ends at the fastest-growing (exponential, slowest execution time). Notice that as n, the input, increases in each of these functions, the result grows much more quickly for quadratic, polynomial, and exponential than for logarithmic and linear.
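
To make the comparison concrete, here is a small Python sketch (all values purely illustrative; the exponential column assumes a = 2) that tabulates each growth class at a few input sizes:

import math

print(f"{'n':>4} {'log n':>8} {'n':>6} {'n^2':>8} {'2^n':>22}")
for n in [2, 8, 32, 64]:
    print(f"{n:>4} {math.log2(n):>8.1f} {n:>6} {n**2:>8} {2**n:>22}")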

One extremely important note: for the notations about to be discussed, you should do your best to use the simplest terms. This means disregarding constants and lower-order terms, because as the input size (n in our f(n) example) increases toward infinity (a mathematical limit), the lower-order terms and constants matter little to not at all. That said, if you have constants like 2^9001, or some other ridiculous, unimaginable amount, realize that simplifying will skew your notation's accuracy.
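
Here is a small sketch of why lower-order terms wash out. For a hypothetical runtime f(n) = 5n^2 + 3n + 20 (the coefficients are made up for illustration), the ratio f(n)/n^2 settles toward the constant 5 as n grows, so n^2 alone captures the growth rate:

def f(n):
    # Hypothetical runtime with a constant factor and lower-order terms.
    return 5 * n**2 + 3 * n + 20

for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: f(n)/n^2 = {f(n) / n**2:.4f}")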

Since we want the simplest form, let's modify our table a bit…

1 Logarithmic - log n
2 Linear - n
3 Quadratic - n^2
4 Polynomial - n^z, where z is some constant
5 Exponential - a^n, where a is some constant

I) Big-O

Big-O, commonly written as O, is an asymptotic notation for the worst case, or the ceiling of growth, of a given function. It provides an asymptotic upper bound on the growth rate of an algorithm's runtime. Say f(n) is your algorithm's runtime and g(n) is an arbitrary time complexity you are trying to relate to your algorithm. f(n) is O(g(n)) if, for some real constants c (c > 0) and n0 (n0 > 0), f(n) <= c g(n) for every input size n (n > n0).

Example 1:

f(n) = 3 log n + 100
g(n) = log n

Is f(n) O(g(n))? Is 3 log n + 100 O(log n)? Let's look at the definition of Big-O.

3 log n + 100 <= c * log n

Is there some pair of constants c, n0 that satisfies this for all n > n0?

3 log n + 100 <= 150 * log n, for n > 2 (at n = 1, log n = 0, so the right-hand side vanishes and the inequality fails)

The definition of Big-O has been met; therefore f(n) is O(g(n)).
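
As a sanity check, this Python sketch plugs the constants c = 150 and n0 = 2 from above into the definition (log base 2 is assumed here; the choice of base only changes the constant, not the conclusion):

import math

c, n0 = 150, 2  # constants chosen in Example 1
for n in [3, 10, 100, 10_000]:
    f = 3 * math.log2(n) + 100
    bound = c * math.log2(n)
    print(f"n = {n:>6}: f(n) = {f:8.2f} <= c*g(n) = {bound:8.2f} -> {f <= bound}")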

Example 2:

f(n) = 3*n^2
g(n) = n

Is f(n) O(g(n))? Is 3 * n^2 O(n)? Let’s look at the definition of Big-O.

3 * n^2 <= c * n

Is there some pair of constants c, n0 that satisfies this for all n > n0? No, there isn't: 3 * n^2 <= c * n would require 3n <= c, which fails as soon as n exceeds c/3, no matter how large c is. f(n) is NOT O(g(n)).

II) Big-Omega

Big-Omega, commonly written as Ω, is an asymptotic notation for the best case, or a floor on the growth rate of a given function. It provides an asymptotic lower bound on the growth rate of an algorithm's runtime.

f(n) is Ω(g(n)) if, for some real constants c (c > 0) and n0 (n0 > 0), f(n) >= c g(n) for every input size n (n > n0).
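
For instance, reusing f(n) = 3n^2 from Example 2 (which failed as O(n) but does satisfy Ω(n)): with c = 1 and n0 = 1, 3n^2 >= 1 * n for every n > n0, so f(n) is Ω(n). A quick Python check of those particular constants:

c, n0 = 1, 1              # one valid choice among many
f = lambda n: 3 * n**2    # f(n) from Example 2 above
g = lambda n: n           # candidate lower bound g(n)
for n in [2, 10, 100, 1_000]:  # every sampled n is > n0
    print(f"n = {n:>5}: f(n) = {f(n):>9} >= c*g(n) = {c * g(n):>5} -> {f(n) >= c * g(n)}")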

Note

The asymptotic growth rates provided by big-O and big-omega notation may or may not be asymptotically tight. Thus we use small-o and small-omega notation to denote bounds that are not asymptotically tight.

III) Small-o

Small-o, commonly written as o, is an asymptotic notation denoting an upper bound (one that is not asymptotically tight) on the growth rate of an algorithm's runtime.

f(n) is o(g(n)) if, for every real constant c (c > 0), there exists an n0 (n0 > 0) such that f(n) < c g(n) for every input size n (n > n0).

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound f(n) <= c g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound f(n) < c g(n) holds for all constants c > 0.
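
One common way to see the "for all constants" requirement: when g is positive, f(n) is o(g(n)) exactly when the ratio f(n)/g(n) tends to 0, so any c > 0 eventually dominates. The sketch below uses f(n) = n and g(n) = n^2 as an illustrative pair (not from the lesson) and shows the ratio shrinking, which is why n is o(n^2):

f = lambda n: n       # f grows strictly slower than g
g = lambda n: n**2
for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: f(n)/g(n) = {f(n) / g(n):.6f}")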

IV) Small-omega

Small-omega, commonly written as ω, is an asymptotic notation denoting a lower bound (one that is not asymptotically tight) on the growth rate of an algorithm's runtime.

f(n) is ω(g(n)) if, for every real constant c (c > 0), there exists an n0 (n0 > 0) such that f(n) > c g(n) for every input size n (n > n0).

The definitions of Ω-notation and ω-notation are similar. The main difference is that in f(n) = Ω(g(n)), the bound f(n) >= c g(n) holds for some constant c > 0, but in f(n) = ω(g(n)), the bound f(n) > c g(n) holds for all constants c > 0.
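
Mirroring the small-o sketch, f(n) = n^2 is ω(n) because the ratio f(n)/g(n) grows without bound, so it eventually exceeds any constant c > 0 (again an illustrative pair, not from the lesson):

f = lambda n: n**2    # f grows strictly faster than g
g = lambda n: n
for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: f(n)/g(n) = {f(n) / g(n):.1f}")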

V) Theta (Θ)

Theta, commonly written as Θ, is an asymptotic notation denoting an asymptotically tight bound on the growth rate of an algorithm's runtime.

f(n) is Θ(g(n)) if, for some real constants c1, c2, and n0 (c1 > 0, c2 > 0, n0 > 0), c1 g(n) <= f(n) <= c2 g(n) for every input size n (n > n0).
∴ f(n) is Θ(g(n)) implies f(n) is O(g(n)) as well as f(n) is Ω(g(n)).
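
For example, a hypothetical runtime f(n) = 3n^2 + 2n (made up for illustration) is Θ(n^2): one valid choice of constants is c1 = 3, c2 = 5, n0 = 1, since 3n^2 <= 3n^2 + 2n <= 5n^2 for every n > 1. A quick Python check of that sandwich:

c1, c2, n0 = 3, 5, 1
f = lambda n: 3 * n**2 + 2 * n   # hypothetical runtime
g = lambda n: n**2               # tight bound candidate
for n in [2, 10, 100]:           # every sampled n is > n0
    lo, hi = c1 * g(n), c2 * g(n)
    print(f"n = {n:>4}: {lo:>6} <= f(n) = {f(n):>6} <= {hi:>6} -> {lo <= f(n) <= hi}")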

Examples

[Table 3: Asymptotic complexity of Sum (Algorithm 2); table image not reproduced]
[Table 4: Asymptotic complexity of RSum (Algorithm 3); table image not reproduced]
