
# Asymptotic Notations


Asymptotic Notations are languages that allow us to analyze an algorithm's running time by describing its behavior as the input size for the algorithm increases. This is also known as an algorithm's growth rate.

Let us imagine an algorithm as a function f, with n as the input size and f(n) as the running time. So for a given algorithm f with input size n, you get some resulting running time f(n). Plotting this gives a graph where the Y axis is the running time, the X axis is the input size, and the plotted points are the running times measured for each input size.
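To make this concrete, here is a minimal Python sketch that samples f(n) for a simple linear-time algorithm at a few input sizes (the `linear_scan` helper is just an illustration for this lesson, not a prescribed algorithm):

```python
import time

def linear_scan(data):
    """A simple linear-time algorithm: find the maximum by scanning once."""
    best = data[0]
    for x in data:
        if x > best:
            best = x
    return best

# Sample the running time f(n) for growing input sizes n.
for n in [10_000, 100_000, 1_000_000]:
    data = list(range(n))
    start = time.perf_counter()
    linear_scan(data)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: f(n) = {elapsed:.6f} s")
```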

You can label a function, or algorithm, with an Asymptotic Notation in many different ways. For example, you can describe an algorithm by its best case, worst case, or average case. The most common is to analyze an algorithm by its worst case. You typically don't evaluate by best case because those conditions aren't what you're planning for.

### Types of functions, limits, and simplification

```
Logarithmic Function - log n
Linear Function - an + b
Quadratic Function - an^2 + bn + c
Polynomial Function - a_z * n^z + ... + a_2 * n^2 + a_1 * n + a_0, where z is some constant
Exponential Function - a^n, where a is some constant
```

These are some basic function growth classifications used in various notations. The list starts at the slowest growing function (logarithmic, fastest execution time) and goes on to the fastest growing (exponential, slowest execution time). Notice that as 'n', or the input, increases in each of these functions, the result increases much more quickly in the quadratic, polynomial, and exponential functions than in the logarithmic and linear ones.
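As a rough illustration, the following sketch (assuming base-2 logarithms, an arbitrary choice) prints how each class grows for a few values of n:

```python
import math

# Compare how each growth class scales as n increases.
for n in [10, 100, 1000]:
    print(f"n = {n:>5}: "
          f"log n = {math.log2(n):8.1f}, "
          f"n = {n:8d}, "
          f"n^2 = {n**2:10d}, "
          f"2^n has {len(str(2**n)):6d} digits")
```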

One extremely important note: for the notations about to be discussed, you should do your best to use the simplest terms. This means disregarding constants and lower-order terms, because as the input size (or n in our f(n) example) increases toward infinity (mathematical limits), the lower-order terms and constants are of little to no importance. That being said, if you have constants like 2^9001, or some other unimaginably large amount, realize that simplifying will skew your notation's accuracy.

Since we want the simplest form, let's modify our table a bit:

```
Logarithmic - log n
Linear - n
Quadratic - n^2
Polynomial - n^z, where z is some constant
Exponential - a^n, where a is some constant
```

### I) Big-O

Big-O, commonly written as O, is an Asymptotic Notation for the worst case, or the ceiling of growth, for a given function. It provides us with an asymptotic upper bound for the growth rate of the runtime of an algorithm. Say `f(n)` is your algorithm's runtime, and `g(n)` is an arbitrary time complexity you are trying to relate to your algorithm. `f(n)` is O(g(n)) if, for some real constants c (c > 0) and n0 (n0 > 0), `f(n)` <= `c g(n)` for every input size n (n > n0).
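One informal way to sanity-check a candidate pair (c, n0) is to sample the inequality numerically. The `holds_upper_bound` helper below is a hypothetical sketch, not part of the formal definition: a failing sample genuinely disproves the chosen pair, while passing samples are only evidence, not a proof.

```python
def holds_upper_bound(f, g, c, n0, n_max=10**6):
    """Spot-check f(n) <= c * g(n) at doubling sample points n in (n0, n_max].

    A failing sample disproves the chosen (c, n0) pair; passing samples
    are only evidence, not a proof of the bound.
    """
    n = n0 + 1
    while n <= n_max:
        if f(n) > c * g(n):
            return False, n  # counterexample: the bound fails here
        n *= 2
    return True, None
```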

### Example 1:

```
f(n) = 3 log n + 100
g(n) = log n
```

Is `f(n)` O(g(n))? Is `3 log n + 100` O(log n)? Let's look at the definition of Big-O:

``3 log n + 100 <= c * log n``

Is there some pair of constants c, n0 that satisfies this for all n > n0?

``3 log n + 100 <= 150 * log n, n > 2`` (the bound fails at n = 1, where log n = 0)

The definition of Big-O has been met; therefore, `f(n)` is O(g(n)).
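Using the hypothetical `holds_upper_bound` sketch from above, the constants c = 150 and n0 = 2 from this example pass the sampled check:

```python
import math

f = lambda n: 3 * math.log2(n) + 100
g = lambda n: math.log2(n)

# Spot-check the pair c = 150, n0 = 2 from the worked example.
print(holds_upper_bound(f, g, c=150, n0=2))  # -> (True, None)
```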

### Example 2:

```
f(n) = 3 * n^2
g(n) = n
```

Is `f(n)` O(g(n))? Is `3 * n^2` O(n)? Let’s look at the definition of Big-O.

``3 * n^2 <= c * n``

Is there some pair of constants c, n0 that satisfies this for all n > n0? No, there isn't. `f(n)` is NOT O(g(n)).
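The same sketch makes the failure visible: `3 * n^2 <= c * n` would require `3n <= c` for all n > n0, and no fixed constant c satisfies that. For instance, with c = 1000:

```python
f = lambda n: 3 * n**2
g = lambda n: n

# Whatever c you pick, 3n eventually exceeds it, so a sampled n fails.
print(holds_upper_bound(f, g, c=1_000, n0=1))  # -> (False, 512)
```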

### II) Big-Omega

Big-Omega, commonly written as Ω, is an Asymptotic Notation for the best case, or the floor of growth, for a given function. It provides us with an asymptotic lower bound for the growth rate of the runtime of an algorithm.

`f(n)` is Ω(g(n)) if, for some real constants c (c > 0) and n0 (n0 > 0), `f(n)` >= `c g(n)` for every input size n (n > n0).
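Mirroring the Big-O sketch above, a hypothetical `holds_lower_bound` helper just flips the inequality; for example, f(n) = 3n^2 + 5 is Ω(n^2), and the pair c = 3, n0 = 1 passes the sampled check:

```python
def holds_lower_bound(f, g, c, n0, n_max=10**6):
    """Spot-check f(n) >= c * g(n) at doubling sample points n in (n0, n_max]."""
    n = n0 + 1
    while n <= n_max:
        if f(n) < c * g(n):
            return False, n  # counterexample: the bound fails here
        n *= 2
    return True, None

# 3n^2 + 5 >= 3 * n^2 holds for every n, so this pair passes.
print(holds_lower_bound(lambda n: 3 * n**2 + 5, lambda n: n**2, c=3, n0=1))
```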

### Note

The asymptotic growth rates provided by big-O and big-omega notation may or may not be asymptotically tight. Thus we use small-o and small-omega notation to denote bounds that are not asymptotically tight.

### III) Small-o

Small-o, commonly written as o, is an Asymptotic Notation to denote the upper bound (that is not asymptotically tight) on the growth rate of runtime of an algorithm.

`f(n)` is o(g(n)) if, for all real constants c (c > 0), there is some n0 (n0 > 0) such that `f(n)` < `c g(n)` for every input size n (n > n0).

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound f(n) <= c g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound f(n) < c g(n) holds for all constants c > 0.

### IV) Small-omega

Small-omega, commonly written as ω, is an Asymptotic Notation to denote the lower bound (that is not asymptotically tight) on the growth rate of runtime of an algorithm.

`f(n)` is ω(g(n)) if, for all real constants c (c > 0), there is some n0 (n0 > 0) such that `f(n)` > `c g(n)` for every input size n (n > n0).

The definitions of Ω-notation and ω-notation are similar. The main difference is that in f(n) = Ω(g(n)), the bound f(n) >= c g(n) holds for some constant c > 0, but in f(n) = ω(g(n)), the bound f(n) > c g(n) holds for all constants c > 0.
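An informal way to see the "for all constants c" conditions is to watch the ratio f(n)/g(n): if it tends to 0, f(n) is o(g(n)); if it grows without bound, f(n) is ω(g(n)). A minimal sketch with illustrative functions:

```python
import math

# For small-o, f(n)/g(n) shrinks toward 0 as n grows;
# for small-omega, the same ratio grows without bound.
f = lambda n: 100 * math.log2(n)   # f is o(n): ratio -> 0
h = lambda n: n**2                 # h is ω(n): ratio -> infinity
for n in [10**2, 10**4, 10**6]:
    print(f"n = {n:>8}: f(n)/n = {f(n)/n:10.4f}, h(n)/n = {h(n)/n:12.1f}")
```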

### V) Theta (Θ)

Theta, commonly written as Θ, is an Asymptotic Notation to denote the asymptotically tight bound on the growth rate of runtime of an algorithm.

`f(n)` is Θ(g(n)) if, for some real constants c1, c2, and n0 (c1 > 0, c2 > 0, n0 > 0), `c1 g(n)` < `f(n)` < `c2 g(n)` for every input size n (n > n0).

∴ `f(n)` is Θ(g(n)) implies `f(n)` is O(g(n)) as well as `f(n)` is Ω(g(n)).
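A quick numeric sandwich check, using the illustrative choices f(n) = 3n^2 + 10n, g(n) = n^2, c1 = 3, c2 = 4, and n0 = 10 (these constants are an assumption for the demo, not part of the definition):

```python
# Sandwich check for Θ: c1*g(n) < f(n) < c2*g(n) for n > n0 = 10.
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2
for n in [11, 100, 10_000]:
    assert 3 * g(n) < f(n) < 4 * g(n)
print("c1*g(n) < f(n) < c2*g(n) holds at all sampled n > 10")
```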