An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to produce the desired output. An algorithm is designed to perform a specific task. This can be a simple process, such as multiplying two numbers, or a complex operation, such as playing a compressed video file. Search engines, for example, use proprietary algorithms to display the most relevant results from their search index for a given query.

Algorithm analysis is a technique used to measure the performance of algorithms. Speed is one of the key parameters in determining the potential of an algorithm, but other factors such as user-friendliness, security, maintainability, and storage usage also determine its quality. Time complexity and space complexity are the metrics used to measure these parameters.

An algorithm is generally analyzed on two factors: time and space. That is, how much execution time and how much extra space the algorithm requires.

It provides a comparison of the performances of various algorithms.

For large numbers of components/inputs, an exact running time is dominated by the effect of the input size (the number of components); the multiplicative constants and lower-order terms become insignificant.

**Asymptotic notations** are the mathematical expressions used to represent the complexity of an algorithm.

There are three **types of analysis** that we perform on a particular algorithm.

**Best Case**: We analyse the performance of an algorithm on the input for which it takes the least time or space.

**Worst Case**: We analyse the performance of an algorithm on the input for which it takes the most time or space.

**Average Case**: We analyse the performance of an algorithm on inputs for which it takes time or space lying between the best and worst cases.

**Greedy Approach −** finding a solution by choosing the next best option at each step.

**Divide and Conquer −** dividing the problem into the smallest possible sub-problems and solving them independently.

**Dynamic Programming −** dividing the problem into overlapping sub-problems, solving each once, and combining the stored results.

**The following problems are commonly solved using the greedy approach:**

- Travelling Salesman Problem.
- Prim’s Minimal Spanning Tree Algorithm.
- Kruskal’s Minimal Spanning Tree Algorithm.
- Dijkstra’s Shortest Path Algorithm.
- Graph – Map Coloring.
- Graph – Vertex Cover.
- Knapsack Problem (fractional variant).
- Job Scheduling Problem.

**The following problems are commonly solved using the divide and conquer approach:**

- Merge Sort
- Quick Sort
- Binary Search
- Strassen’s Matrix Multiplication
- Closest pair (points)

**The following problems are commonly solved using the dynamic programming approach:**

- Fibonacci number series
- Knapsack problem
- Tower of Hanoi
- All pair shortest path by Floyd-Warshall
- Shortest path by Dijkstra
- Project scheduling

Linear search tries to find an item in a sequentially arranged collection. These sequentially arranged data items, known as an array or list, are accessible at incrementing memory locations. Linear search compares the expected data item with each of the data items in the list or array. Both the average-case and worst-case time complexity of linear search are Ο(n). Data in the target array or list need not be sorted.

Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search algorithm works on the principle of divide and conquer. For this algorithm to work properly, the data collection should be in the sorted form.

Binary search looks for a particular item by comparing it with the middle-most item of the collection. If a match occurs, the index of the item is returned. If the middle item is greater than the target, the target is searched for in the sub-array to the left of the middle item. Otherwise, it is searched for in the sub-array to the right of the middle item. This process continues on the sub-array until the size of the sub-array reduces to zero.

This search first compares the target value to the middle of the list. If it is not found there, it decides whether to continue in the left or the right half.
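A minimal Python sketch of this decision process, assuming a sorted list of integers:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] > target:
            high = mid - 1   # continue in the left half
        else:
            low = mid + 1    # continue in the right half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # → 5
```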

Bubble sort is an algorithm that compares the adjacent elements and swaps their positions if they are not in the intended order. The order can be ascending or descending.
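A sketch of bubble sort in Python; the early-exit flag is a common optimization, not part of the basic description above:

```python
def bubble_sort(items):
    data = list(items)               # work on a copy
    n = len(data)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:            # adjacent pair out of order
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:                          # no swaps: already sorted
            break
    return data
```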

Insertion sort divides the list into two sub-lists, sorted and unsorted. It takes one element at a time, finds its appropriate location in the sorted sub-list, and inserts it there. The output after each insertion is a larger sorted sub-list. It iteratively works through all the elements of the unsorted sub-list and inserts them into the sorted sub-list in order.
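The two sub-lists can live in one array; a Python sketch:

```python
def insertion_sort(items):
    data = list(items)
    for i in range(1, len(data)):
        key = data[i]                  # next element from the unsorted sub-list
        j = i - 1
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]      # shift larger sorted elements right
            j -= 1
        data[j + 1] = key              # insert into the sorted sub-list
    return data
```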

Selection sort is an in-place sorting technique. It divides the data set into two sub-lists: sorted and unsorted. It then selects the minimum element from the unsorted sub-list and places it into the sorted list. This iterates until all the elements from the unsorted sub-list have been consumed into the sorted sub-list.

In simple terms, selection sort is an algorithm that selects the smallest element from the unsorted list in each iteration and places that element at the beginning of the unsorted list.
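That description maps directly to a short Python sketch:

```python
def selection_sort(items):
    data = list(items)
    n = len(data)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):              # scan the unsorted sub-list
            if data[j] < data[smallest]:
                smallest = j
        # place the minimum at the front of the unsorted sub-list
        data[i], data[smallest] = data[smallest], data[i]
    return data
```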

Both sorting techniques maintain two sub-lists, sorted and unsorted, and both take one element at a time and place it into the sorted sub-list. Insertion sort works on the current element in hand and places it at the appropriate location in the sorted sub-list. Selection sort, by contrast, searches for the minimum in the unsorted sub-list and swaps it with the current element in hand.

Merge sort is a sorting algorithm based on the divide and conquer approach. It keeps dividing the list into smaller sub-lists until each sub-list has only one element, and then merges them in sorted order until all sub-lists are consumed. It has a run-time complexity of Ο(n log n) and needs Ο(n) auxiliary space.
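A top-down merge sort sketch in Python (this copy-based version uses the Ο(n) auxiliary space mentioned above):

```python
def merge_sort(items):
    if len(items) <= 1:                 # a sub-list of one element is sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge in sorted order
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```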

Shell sort is an algorithm that first sorts the elements far apart from each other and successively reduces the interval between the elements to be sorted. It is a generalized version of insertion sort.

In shell sort, elements at a specific interval are sorted. The interval between the elements is gradually decreased based on the sequence used. The performance of shell sort depends on the type of sequence used for a given input array.

Some of the optimal sequences used are:

- Shell’s original sequence: N/2, N/4, …, 1
- Knuth’s increments: 1, 4, 13, …, (3^k − 1)/2
- Sedgewick’s increments: 1, 8, 23, 77, 281, 1073, 4193, 16577, …, 4^(j+1) + 3·2^j + 1
- Hibbard’s increments: 1, 3, 7, 15, 31, 63, 127, 255, 511, …
- Papernov & Stasevich increments: 1, 3, 5, 9, 17, 33, 65, …
- Pratt: 1, 2, 3, 4, 6, 9, 8, 12, 18, 27, 16, 24, 36, 54, 81, …
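A shell sort sketch in Python using Shell’s original N/2, N/4, …, 1 sequence:

```python
def shell_sort(items):
    data = list(items)
    gap = len(data) // 2          # Shell's original sequence: N/2, N/4, ..., 1
    while gap > 0:
        # gapped insertion sort: sort elements that are `gap` apart
        for i in range(gap, len(data)):
            key = data[i]
            j = i
            while j >= gap and data[j - gap] > key:
                data[j] = data[j - gap]
                j -= gap
            data[j] = key
        gap //= 2                 # shrink the interval
    return data
```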

Quick sort uses the divide and conquer approach. It divides the list into smaller ‘partitions’ using a ‘pivot’. The values smaller than the pivot are arranged in the left partition and greater values in the right partition. Each partition is then recursively sorted using quick sort.
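A simple (not in-place) Python sketch of that partitioning idea, using the middle element as pivot:

```python
def quick_sort(items):
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]      # smaller values: left partition
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]     # greater values: right partition
    return quick_sort(left) + middle + quick_sort(right)
```

Production quick sorts usually partition in place to avoid the extra lists used here.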

A graph is a pictorial representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by points termed as vertices, and the links that connect the vertices are called edges.

The DFS algorithm traverses the graph in such a way that it tries to go as far from the root node as possible. A stack is used in the implementation of depth first search. Let’s see how depth first search works with respect to the following graph:

As stated before, in DFS, nodes are visited by going through the depth of the tree from the starting node. If we do the depth first traversal of the above graph and print the visited nodes, the output will be “A B E F C D”. DFS visits the root node and then its child nodes until it reaches the end nodes, i.e. E and F, then moves back up to the parent nodes.

#### Algorithmic Steps

**Step 1**: Push the root node onto the stack.

**Step 2**: Loop until the stack is empty.

**Step 3**: Peek at the node on top of the stack.

**Step 4**: If the node has unvisited child nodes, get an unvisited child node, mark it as traversed, and push it onto the stack.

**Step 5**: If the node does not have any unvisited child nodes, pop the node from the stack.
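The steps above can be sketched in Python; the adjacency lists below are an assumed example graph chosen to reproduce the “A B E F C D” order from the text:

```python
def dfs(graph, root):
    visited, order = {root}, [root]
    stack = [root]                              # Step 1: push the root
    while stack:                                # Step 2: loop until empty
        node = stack[-1]                        # Step 3: peek
        unvisited = [c for c in graph[node] if c not in visited]
        if unvisited:                           # Step 4: push an unvisited child
            child = unvisited[0]
            visited.add(child)
            order.append(child)
            stack.append(child)
        else:                                   # Step 5: pop
            stack.pop()
    return order

graph = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'],
         'C': [], 'D': [], 'E': [], 'F': []}
print(dfs(graph, 'A'))  # → ['A', 'B', 'E', 'F', 'C', 'D']
```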

The BFS algorithm traverses the graph level by level, staying as close as possible to the root node. A queue is used in the implementation of breadth first search. Let’s see how BFS traversal works with respect to the following graph:

If we do the breadth first traversal of the above graph and print the visited nodes as the output, it will print “A B C D E F”. BFS visits the nodes level by level: it starts with level 0, the root node, then moves to the next level (B, C and D), then the last level (E and F).

#### Algorithmic Steps

**Step 1**: Push the root node into the queue.

**Step 2**: Loop until the queue is empty.

**Step 3**: Remove a node from the queue.

**Step 4**: If the removed node has unvisited child nodes, mark them as visited and insert them into the queue.
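The same steps in Python, with an assumed example graph matching the “A B C D E F” order from the text:

```python
from collections import deque

def bfs(graph, root):
    visited, order = {root}, [root]
    queue = deque([root])                  # Step 1: enqueue the root
    while queue:                           # Step 2: loop until empty
        node = queue.popleft()             # Step 3: remove a node
        for child in graph[node]:          # Step 4: enqueue unvisited children
            if child not in visited:
                visited.add(child)
                order.append(child)
                queue.append(child)
    return order

graph = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'],
         'C': [], 'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A'))  # → ['A', 'B', 'C', 'D', 'E', 'F']
```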

A tree is a minimally connected graph having no loops and circuits.

A binary tree has the special condition that each node can have at most two children.

The Tower of Hanoi is a famous problem posed by the French mathematician Édouard Lucas in 1883.

What you need to do is move all the disks from the left hand post to the right hand post. You can only move the disks one at a time and you can never place a bigger disk on a smaller disk.

The aim is to complete the transfer using the smallest number of moves possible. For example, if you have three disks, the minimum number of moves is 7; if you have four disks, it is 15. In general, n disks require 2^n − 1 moves.
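The classic recursive solution can be sketched in Python; the post names are illustrative:

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the list of (from, to) moves transferring n disks."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on the spare
        moves.append((source, target))              # move the biggest disk
        hanoi(n - 1, spare, target, source, moves)  # bring the n-1 disks back on top
    return moves

print(len(hanoi(3, 'left', 'right', 'middle')))  # → 7
```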

Recursion is the name given to the technique of defining a set or a process in terms of itself. There are essentially two types of recursion: the first concerns recursively defined functions, and the second is the recursive use of a procedure.

In the algorithmic notation, a string is expressed as any sequence of characters enclosed in single quote marks.

1. Find the number of elements on the left side.

2. If it is n−1, the root is the median.

3. If it is more than n−1, then the median has already been found in the left subtree.

4. Else it should be in the right subtree.

The general strategy in a Markov algorithm is to take as input a string a and, through a number of steps in the algorithm, transform a into an output string b. This transformation process is commonly performed in computers for text editing or program compilation.

Diffie-Hellman key exchange, also called exponential key exchange, is a method of digital encryption that uses numbers raised to specific powers to produce decryption keys on the basis of components that are never directly transmitted, making the task of a would-be code breaker mathematically overwhelming.

To implement Diffie-Hellman, the two end users, Alice and Bob, mutually agree on positive whole numbers *p* and *q*, such that *p* is a prime number and *q* is a generator of *p*. The generator *q* is a number that, when raised to positive whole-number powers less than *p*, never produces the same result for any two such powers. The value of *p* may be large, but the value of *q* is usually small.
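A toy Python sketch of the exchange; the numbers are deliberately tiny and insecure, chosen only to make the arithmetic visible (5 is a generator of 23):

```python
p, q = 23, 5                      # public values: prime p and generator q
a_secret, b_secret = 6, 15        # private exponents chosen by Alice and Bob

A = pow(q, a_secret, p)           # Alice transmits q^a mod p
B = pow(q, b_secret, p)           # Bob transmits q^b mod p

alice_key = pow(B, a_secret, p)   # Alice computes (q^b)^a mod p
bob_key = pow(A, b_secret, p)     # Bob computes (q^a)^b mod p
assert alice_key == bob_key       # both derive the same shared key
print(alice_key)
```

The secrets a and b are never transmitted; only q^a mod p and q^b mod p cross the channel.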

The goal is to completely fill the distance array so that, for each vertex v, the value of distance[v] is the weight of the shortest path from start to v.

Backtracking is an algorithmic technique in which we try to find the solution to a given problem by exploring various directions (or paths) and rejecting those directions (or paths) that we find will not lead to the required solution.

For example (the simplest one), suppose the problem statement is to find three numbers whose sum is 10 among the given numbers (3, 2, 7, 5, 6, 8). We start with 3 + 2 + 7 = 12, which is greater than 10, so we backtrack (move one step back) and try 3 + 2 + 5 = 10. There we go: in this way we find the required solution.

So backtracking means tracking (or moving) back in order to find another path to the required solution.
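The three-numbers example above can be sketched as a backtracking search in Python:

```python
def three_sum(numbers, target, chosen=None, start=0):
    """Backtracking: find three numbers from `numbers` summing to `target`."""
    if chosen is None:
        chosen = []
    if len(chosen) == 3:
        return list(chosen) if sum(chosen) == target else None
    for i in range(start, len(numbers)):
        chosen.append(numbers[i])                 # explore this path
        found = three_sum(numbers, target, chosen, i + 1)
        if found:
            return found
        chosen.pop()                              # dead end: backtrack
    return None

print(three_sum([3, 2, 7, 5, 6, 8], 10))  # → [3, 2, 5]
```

The search first tries 3 + 2 + 7 = 12, rejects it, pops the 7, and succeeds with 3 + 2 + 5, exactly as described in the text.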

Best first search uses the concept of a priority queue and heuristic search. To search the graph space, the best first search method uses two lists for tracking the traversal: an ‘OPEN’ list, which keeps track of the current ‘immediate’ nodes available for traversal, and a ‘CLOSED’ list, which keeps track of the nodes already traversed.

A brute force algorithm is a type of algorithm that proceeds in a simple and direct way but requires a huge number of steps to complete. As an example, to find a secret numeric code by brute force, you would have to try all the possible digit combinations one by one.

Ord algorithm constructs the vectors TITLE, T_INDEX, and KEYWORDS.

Two rectangles do not overlap if one of the following conditions is true:

1) One rectangle is above the top edge of the other rectangle.

2) One rectangle is to the left of the left edge of the other rectangle.

Two elements arr[i] and arr[j] in an array arr[] form an inversion if arr[i] > arr[j] and i < j. How do we count all inversions in an unsorted array?

The idea is to use divide and conquer (a modified merge sort) to do it in O(n log n) time.

Insertion sort is an in-place sorting algorithm, which means it requires no or little extra storage. Insertion sort requires only a single list element to be stored outside the initial data, making its space complexity O(1).

Encryption is the process of converting plaintext into a secret coded format referred to as “ciphertext”. To convert the text, the algorithm uses a string of bits referred to as a “key” for its calculations. The larger the key, the greater the number of potential patterns for creating ciphertext. Most encryption algorithms operate on fixed blocks of input about 64 to 128 bits in length, while some use the stream method.

For example, in this problem, the base case is reaching a leaf node, where there are no more nodes to explore. At that point, the recursion starts to wind down. Here are the exact steps to traverse the binary tree using in-order traversal:

1. Visit the left node

2. Print the value of the root

3. Visit the right node

Here is the sample code to implement this algorithm using recursion in Java:

```
private void inOrder(TreeNode node) {
    if (node == null) {
        return;
    }
    inOrder(node.left);
    System.out.printf("%s ", node.data);
    inOrder(node.right);
}
```

Strassen’s matrix multiplication can be performed only on **square matrices** where **n** is a **power of 2**. The order of both matrices is **n × n**.

The procedure is:

- Divide each matrix of order n × n recursively until we get sub-matrices of order 2 × 2.
- Use the set of formulas below to carry out the 2 × 2 matrix multiplications.
- In conventional 2 × 2 multiplication, eight multiplications and four additions are performed.
- Combine the results of the sub-matrices to find the final product matrix.

In **Strassen’s matrix multiplication** there are seven multiplications and eighteen additions/subtractions in total.

```
d1 = (a11 + a22) · (b11 + b22)
d2 = (a21 + a22) · b11
d3 = (b12 − b22) · a11
d4 = (b21 − b11) · a22
d5 = (a11 + a12) · b22
d6 = (a21 − a11) · (b11 + b12)
d7 = (a12 − a22) · (b21 + b22)

c11 = d1 + d4 − d5 + d7
c12 = d3 + d5
c21 = d2 + d4
c22 = d1 − d2 + d3 + d6
```

```
Strassen(n, a, b, C):
begin
    if n = threshold then
        compute C = a * b by conventional matrix multiplication
    else
        partition a into four sub-matrices a11, a12, a21, a22
        partition b into four sub-matrices b11, b12, b21, b22
        Strassen(n/2, a11 + a22, b11 + b22, d1)
        Strassen(n/2, a21 + a22, b11,       d2)
        Strassen(n/2, a11,       b12 − b22, d3)
        Strassen(n/2, a22,       b21 − b11, d4)
        Strassen(n/2, a11 + a12, b22,       d5)
        Strassen(n/2, a21 − a11, b11 + b12, d6)
        Strassen(n/2, a12 − a22, b21 + b22, d7)
        C = | d1+d4−d5+d7   d3+d5       |
            | d2+d4         d1−d2+d3+d6 |
    end if
    return C
end
```

The idea is to exploit the relationship between the solution of a given instance of a problem and the solution of a smaller instance of the same problem. By successively reducing the problem’s size we eventually arrive at a particular case which can be solved directly.

Such an approach can lead to an algorithm that is more efficient than a brute force algorithm; sometimes it is also easier to describe the solution of a problem by referring to the solution of a smaller problem than to describe the solution explicitly.

Best-first search is a search algorithm, which explores a graph by expanding the most promising node chosen according to a specified rule.

Judea Pearl described best-first search as estimating the promise of node n by a “heuristic evaluation function f(n) which, in general, may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and most important, on any extra knowledge about the problem domain.”

```
Power2(x, m)
    p ← x * x
    FOR i ← 1, m−1 DO
        p ← p * p
    ENDFOR
    RETURN p
```

Analysis: a) Correctness: loop invariant p = x^(2^i). b) Complexity: (i) problem size: m; (ii) dominant operation: *; T(m) = m. Remark: m = log n.

```
DFS(G, u)
    u.visited = true
    for each v ∈ G.Adj[u]
        if v.visited == false
            DFS(G, v)

init()
    for each u ∈ G
        u.visited = false
    for each u ∈ G
        if u.visited == false
            DFS(G, u)
```

```
BFS(G, start_v):
    let Q be a queue
    label start_v as discovered
    Q.enqueue(start_v)
    while Q is not empty do
        v = Q.dequeue()
        if v is the goal:
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered:
                label w as discovered
                w.parent = v
                Q.enqueue(w)
```

The space complexity of BFS is O(|V|); for a tree with branching factor b and depth d, this is O(b^d).

The time complexity is O(V + E). Say you have a connected graph with V nodes and E edges. In **DFS**, you traverse each node exactly once, so the time **complexity of DFS** is at least O(V); traversing each edge once as well gives O(V + E).

| COMPARISON | BFS | DFS |
| --- | --- | --- |
| Basic | Vertex-based algorithm | Edge-based algorithm |
| Data structure used to store the nodes | Queue | Stack |
| Memory consumption | Inefficient | Efficient |
| Structure of the constructed tree | Wide and short | Narrow and long |
| Traversing fashion | Oldest unvisited vertices are explored first. | Vertices along the edge are explored first. |
| Optimality | Optimal for finding the shortest distance, not in cost. | Not optimal |
| Applications | Examines bipartite graphs, connected components and shortest paths present in a graph. | Examines two-edge-connected graphs, strongly connected graphs, acyclic graphs and topological order. |

**Topological Sort**: A **topological sort** or **topological ordering** of a directed graph is a linear ordering of its vertices such that for every directed edge UV from vertex u to vertex v, u comes before v in the ordering. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph.

As in the image above, the topological order is 7 6 5 4 3 2 1 0. A graph can have more than one topological order; in the example above, 7 5 6 4 2 3 1 0 is also a valid topological order.

```
L ← empty list that will contain the sorted elements
S ← set of all nodes with no incoming edge
while S is non-empty do
    remove a node n from S
    add n to the tail of L
    for each node m with an edge e from n to m do
        remove edge e from the graph
        if m has no other incoming edges then
            insert m into S
if the graph still has edges then
    return error   (graph has at least one cycle)
else
    return L
```
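This is Kahn’s algorithm; a Python sketch using in-degree counts instead of physically deleting edges:

```python
from collections import deque

def topological_sort(graph):
    """graph: dict mapping node -> list of successors. Returns one topological order."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    S = deque(u for u in graph if indegree[u] == 0)  # nodes with no incoming edge
    L = []
    while S:
        n = S.popleft()
        L.append(n)
        for m in graph[n]:
            indegree[m] -= 1          # "remove" the edge n -> m
            if indegree[m] == 0:
                S.append(m)
    if len(L) != len(graph):          # leftover edges: a cycle exists
        raise ValueError("graph has at least one cycle")
    return L
```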

Either the problem or the algorithm can be transformed in one of three ways:

**Instance simplification**: the instance of the problem is transformed into an easier instance to solve.

**Representation change**: the data structure is transformed so that it is more efficient.

**Problem reduction**: the problem is transformed into an easier problem to solve.

“Presorting” is a common example of “instance simplification.” Presorting is sorting ahead of time to make repetitive operations faster. If you wish to find many *k*th order statistics in an array, it can make sense to sort the array ahead of time so that the cost of determining each statistic is constant time.

Presorting is a form of preconditioning. Preconditioning is manipulating the data to make the algorithm faster.

Another example is the problem of determining the uniqueness of array elements. The brute force algorithm compares each array element with the rest of the array, at a cost of Θ(n²). If we presort the array, the algorithm only needs to compare adjacent elements for uniqueness. The cost of determining uniqueness (without the sorting cost) is then Θ(n).

The total cost is

`T(n) = Tsort(n) + Tscan(n) ∈ Θ(n log n) + Θ(n) = Θ(n log n)`

```
Algorithm PresortMode(A[0…n−1])
// assumes that A is sorted
i ← 0
modefrequency ← 0
while i < n do
    runlength ← 1
    runvalue ← A[i]
    while i + runlength < n and A[i + runlength] = runvalue do
        runlength ← runlength + 1
    if runlength > modefrequency then
        modefrequency ← runlength
        modevalue ← runvalue
    i ← i + runlength        // advance past the run
return modevalue
```

Heap Sort is a popular and efficient sorting algorithm in computer programming. Learning how to write the heap sort algorithm requires knowledge of two types of data structures – arrays and trees.

The initial set of numbers that we want to sort is stored in an array e.g. `[10, 3, 76, 34, 23, 32]`

and after sorting, we get a sorted array `[3,10,23,32,34,76]`

Heap sort works by visualizing the elements of the array as a special kind of complete binary tree called a heap.

**Properties of Heaps**

- The height of a heap is floor(log *n*).
- The root contains the highest priority item.
- A node and all of its descendants form a heap.
- A heap can be implemented using an array, and all operations are in-place.
- If the index of the root is 1, then for a node at index *i* the left child is at index 2*i* and the right child at 2*i* + 1.
- Level *i* of the heap has 2^i elements.
- Heap order: the parent value is larger than the children’s values.

```
HeapBottomUp(H[1…n])
for i ← floor(n/2) down to 1 do
    k ← i                         // parent index
    v ← H[k]                      // proposed parent value
    heap ← false
    while not heap and 2*k ≤ n do
        j ← 2*k
        if j < n then             // there are two children
            if H[j] < H[j+1] then j ← j + 1   // find the larger child value
        if v ≥ H[j] then
            heap ← true           // heap ordering holds
        else                      // correct the heap ordering
            H[k] ← H[j]           // bubble down
            k ← j                 // move down the heap
    H[k] ← v                      // insert the value
```

The Floyd-Warshall algorithm tells you the optimal distance between *each* pair of vertices. For two houses joined directly by an edge of weight 1, for instance, it will report that this connecting edge is the quickest path from the source to the destination.

```
create a |V| × |V| matrix M that will describe the distances between vertices
for each cell (i, j) in M:
    if i == j:
        M[i][j] = 0
    else if (i, j) is an edge in E:
        M[i][j] = weight(i, j)
    else:
        M[i][j] = infinity
for k from 1 to |V|:
    for i from 1 to |V|:
        for j from 1 to |V|:
            if M[i][j] > M[i][k] + M[k][j]:
                M[i][j] = M[i][k] + M[k][j]
```

The Floyd-Warshall algorithm runs in O(|V|³) time.
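The recurrence can be sketched in Python; the vertex list and edge dictionary formats here are illustrative:

```python
INF = float('inf')

def floyd_warshall(vertices, edges):
    """edges: dict {(i, j): weight}. Returns shortest distances between all pairs."""
    dist = {(i, j): 0 if i == j else edges.get((i, j), INF)
            for i in vertices for j in vertices}
    for k in vertices:              # allow k as an intermediate vertex
        for i in vertices:
            for j in vertices:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
    return dist

d = floyd_warshall(['a', 'b', 'c'], {('a', 'b'): 1, ('b', 'c'): 2, ('a', 'c'): 10})
print(d['a', 'c'])  # → 3  (a -> b -> c beats the direct edge of weight 10)
```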

Bellman-Ford’s algorithm is used to find the shortest paths from the source vertex to all other vertices in a weighted graph. It relies on the fact that a shortest path contains at most n−1 edges, because a shortest path cannot contain a cycle.

```
vector<int> v[2000 + 10];
int dis[1000 + 10];
int from, next, weight;

for (int i = 0; i < m + 2; i++) {
    v[i].clear();
    dis[i] = 2e9;
}
for (int i = 0; i < m; i++) {
    scanf("%d%d%d", &from, &next, &weight);
    v[i].push_back(from);
    v[i].push_back(next);
    v[i].push_back(weight);
}
dis[0] = 0;
for (int i = 0; i < n - 1; i++) {   // relax every edge n-1 times
    int j = 0;
    while (v[j].size() != 0) {
        if (dis[v[j][0]] + v[j][2] < dis[v[j][1]]) {
            dis[v[j][1]] = dis[v[j][0]] + v[j][2];
        }
        j++;
    }
}
```

The time complexity of the Bellman-Ford algorithm is relatively high, O(V·E); in the case E = V², this becomes O(V³).

The **knapsack problem** or rucksack **problem** is a **problem** in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.

- Total number of items
- Weight
- Value
- Whether the item is fragile or not (true/false)

**Constraints:**

- Maximal weight of the knapsack
- Maximum number of fragile items the knapsack can contain.

It looks like a modification to knapsack would solve it.

```
Let’s say we have N items, the maximum knapsack weight is W, and the max number of fragile items is F.
Let’s define our dp table as a 3-dimensional array dp[N+1][W+1][F+1]
dp[N+1][W+1][F+1] // memo table, initially filled with -1
int solve(n,w,f)
{
if(n > N)return 0;
if(dp[n][w][f] != -1) return dp[n][w][f];
dp[n][w][f] = solve(n+1,w,f); //skip item
if(w + weight(n) <= W && f + isFragile(n) <=F)
dp[n][w][f] = max(dp[n][w][f], value(n) + solve(n+1, w + weight(n), f + isFragile(n)));
return dp[n][w][f]
}
print(solve(1,0,0))
// for getting the chosen subset
vector<int> getSolution(n,w,f)
{
int optimalValue = solve(n,w,f);
vector<int>answer; //just some dynamic array / arrayList
while(n <= N)
{
if(solve(n+1,w,f) == optimalValue) n++; // if after skipping the item we can still get the optimal answer, we just skip it
else // otherwise we can't, so the current item must be taken
{
int new_w = w + weight(n);
int new_f = f + isFragile(n);
answer.push_back(n); //so we just save its index, and update all values
optimalValue -= value(n);
n++;
w = new_w;
f = new_f;
}
}
return answer;
}
```

The **time complexity** of the 0/1 **knapsack** problem is O(nW), where n is the number of items and W is the capacity of the **knapsack**.

A **greedy algorithm** is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. Its ingredients include a candidate set, from which a solution is created, and a selection function, which chooses the best candidate to be added to the solution.

**Examples** of such **greedy algorithms** are Kruskal’s **algorithm** and Prim’s **algorithm** for finding minimum spanning trees, and the **algorithm** for finding optimum Huffman trees.

Huffman **invented** a **greedy algorithm** that constructs an optimal prefix code called a Huffman code.

A minimum-cost spanning tree (MST) is one which has the smallest possible total weight. A graph might have more than one MST. The approach used by **Prim’s algorithm** is to select a “starting” vertex and then “grow” the tree out from that point.

```
MST-PRIM(G, w, r)
    for each u ∈ V[G] do
        key[u] ← ∞
        π[u] ← NIL
    key[r] ← 0
    Q ← V[G]
    while Q ≠ Ø do
        u ← EXTRACT-MIN(Q)
        for each v ∈ Adj[u] do
            if v ∈ Q and w(u, v) < key[v] then
                π[v] ← u
                key[v] ← w(u, v)
```

Make every vertex’s distance from the source infinity, and zero for the vertex from which you are starting.

Then make a min-heap with these values.

```
While (i < n−1)
    u = Extract-Min
    For (each vertex adjacent to u)
        If (distance(adjacent vertex) > edge weight)
            Then update the heap at the location of vertex v with the edge weight; otherwise do nothing.
This will generate the MST.
```

**Kruskal’s algorithm** is a minimum spanning tree **algorithm** that takes a graph as input and finds the subset of the edges of that graph which forms a tree including every vertex.

- create a forest *F* (a set of trees), where each vertex in the graph is a separate tree
- create a set *S* containing all the edges in the graph
- while *S* is not empty and *F* is not yet a spanning tree:
  - remove an edge with minimum weight from *S*
  - if the removed edge connects two different trees, then add it to the forest *F*, combining two trees into a single tree

At the termination of the algorithm, the forest forms a minimum spanning forest of the graph. If the graph is connected, the forest has a single component and forms a minimum spanning tree.

```
KRUSKAL(G):
1  A = ∅
2  foreach v ∈ G.V:
3      MAKE-SET(v)
4  foreach (u, v) in G.E ordered by weight(u, v), increasing:
5      if FIND-SET(u) ≠ FIND-SET(v):
6          A = A ∪ {(u, v)}
7          UNION(FIND-SET(u), FIND-SET(v))
8  return A
```
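A Python sketch of Kruskal with a simple union-find; the `(weight, u, v)` edge format is assumed for illustration:

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the MST as a list of edges."""
    parent = {v: v for v in vertices}       # MAKE-SET for every vertex

    def find(v):                            # FIND-SET with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):      # edges ordered by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # u and v lie in different trees
            parent[ru] = rv                 # UNION the two trees
            mst.append((weight, u, v))
    return mst

print(kruskal(['a', 'b', 'c'],
              [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'c')]))
```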

**Dijkstra’s algorithm** (or **Dijkstra’s** Shortest Path First **algorithm**, SPF **algorithm**) is an **algorithm** for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. **Dijkstra** in 1956 and published three years later.

The **time complexity** for the matrix representation is O(V²). With an adjacency list representation and a min-heap, an O(E log V) **algorithm** is possible.

```
Let v1 be the origin vertex, and initialize X and ShortDist[u] as:
    X := {v1}
    ShortDist[v1] := 0
    FOR each u in V − {v1}
        ShortDist[u] := T[v1, u]
// Now repeatedly enlarge X until X includes all vertices in V
WHILE X <> V
    // Find the vertex w in V − X at the minimum distance from v1
    MinDist := INFINITE
    FOR each v in V − X
        IF ShortDist[v] < MinDist
            MinDist := ShortDist[v]
            w := v
        END {if}
    END {for}
    // Add w to X
    X := X ∪ {w}
    // Update the shortest distance to vertices in V − X
    FOR each u in V − X
        ShortDist[u] := Min(ShortDist[u], ShortDist[w] + T[w, u])
END {while}
```
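A compact Python sketch of Dijkstra using a min-heap; the adjacency-list format `{u: [(v, weight), ...]}` is assumed:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict {u: [(v, weight), ...]}. Returns shortest distances from source."""
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # closest unsettled vertex
        if d > dist[u]:
            continue                        # stale heap entry: skip it
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(g, 'a'))  # → {'a': 0, 'b': 1, 'c': 3}
```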

**Huffman coding** is a lossless **data** compression algorithm. In this algorithm, a variable-length **code** is assigned to each input character; the **code** length is related to how frequently the character is used. It has two main parts: creating a **Huffman tree**, and traversing the **tree** to find the codes.

```
HUFFMAN(C)
 1. n = |C|
 2. Q = C
 3. for i = 1 to n − 1
 4.     do
 5.         z = Allocate-Node()
 6.         x = left[z] = Extract-Min(Q)
 7.         y = right[z] = Extract-Min(Q)
 8.         f[z] = f[x] + f[y]
 9.         Insert(Q, z)
10. return Extract-Min(Q)
```

Analysis: Q is initialized as a priority queue with the character set C; Q = C can be performed using Build-Heap in O(n) time. The for loop runs (n − 1) times, and each heap operation requires O(log n) time. Hence the total running time of Huffman’s algorithm on a set of n characters is O(n log n).
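The same construction can be sketched in Python with `heapq`; the symbol frequencies are illustrative:

```python
import heapq

def huffman_codes(freq):
    """freq: dict {symbol: frequency}. Returns dict {symbol: prefix code}."""
    heap = [[weight, [sym, '']] for sym, weight in freq.items()]
    heapq.heapify(heap)                     # Build-Heap: O(n)
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # x = Extract-Min(Q)
        hi = heapq.heappop(heap)            # y = Extract-Min(Q)
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]         # left branch gets a 0
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]         # right branch gets a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_codes({'a': 5, 'b': 2, 'c': 1}))
```

Frequent symbols end up with shorter codes: here 'a' gets a one-bit code while 'b' and 'c' get two bits.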

**Lower bound** is the best case running time. **Lower bound** is determined by the easiest input. It provides a goal for all inputs.

**Upper bound** is the worst case and is determined by the most difficult input for a given **algorithm.**

**Branch and bound** is a systematic method for solving optimization problems. B&B is a rather general optimization **technique** that applies where the greedy method and dynamic programming fail.

The **computational complexity** of a problem is the minimum of the **complexities** of all possible **algorithms** for this problem (including the unknown **algorithms**). Clearly, both areas are strongly related, as the **complexity of an algorithm** is always an upper bound on the **complexity** of the problem solved by this **algorithm**.

**Ranking Algorithm** definition: an **algorithm** is a set of mathematical calculations designed to create a result. Search engines use **algorithms** to weigh varied elements to determine which webpage is most relevant to a search query.

**PageRank** is a link analysis **algorithm** and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of “measuring” its relative importance within the set.