Algorithm

Getting Started with Algorithm
  What is an Algorithm?
  Characteristics of Algorithm

Analysis Framework
  Performance Analysis
  Mathematical Analysis

Sorting Algorithm

Searching Algorithm

Fundamentals of Data Structures
  Stacks
  Queues
  Graphs
  Trees
  Sets
  Dictionaries

Divide and Conquer
  General Method
  Binary Search
  Recurrence Equation for Divide and Conquer
  Finding the Maximum and Minimum
  Merge Sort
  Quick Sort
  Strassen's Matrix Multiplication
  Advantages and Disadvantages of Divide and Conquer

Decrease and Conquer
  Insertion Sort
  Topological Sort

Greedy Method
  General Method
  Coin Change Problem
  Knapsack Problem
  Job Sequencing with Deadlines
  Minimum Cost Spanning Trees
  Single Source Shortest Paths
  Optimal Tree Problem
  Transform and Conquer Approach

Dynamic Programming
  General Method with Examples
  Multistage Graphs
  Transitive Closure
  All Pairs Shortest Paths

Backtracking
  General Method
  N-Queens Problem
  Sum of Subsets Problem
  Graph Coloring
  Hamiltonian Cycles

Branch and Bound
  0/1 Knapsack Problem

NP-Complete and NP-Hard Problems
Advantages and Disadvantages of Divide and Conquer
ADVANTAGES:
1. Solving difficult problems
Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into subproblems, solving the trivial cases, and combining the sub-solutions into a solution of the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller instance, as in the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n − 1.
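As a minimal sketch of the decrease-and-conquer idea, the Tower of Hanoi reduction can be written directly as a recursive function (the function name and the representation of moves as peg-label pairs are illustrative):

```python
def hanoi(n, src, aux, dst, moves):
    """Move a tower of n disks from src to dst, using aux as the spare peg."""
    if n == 0:
        return                              # trivial case: nothing to move
    hanoi(n - 1, src, dst, aux, moves)      # reduce: move the top n-1 disks aside
    moves.append((src, dst))                # move the largest disk directly
    hanoi(n - 1, aux, src, dst, moves)      # move the smaller tower back on top

moves = []
hanoi(3, "A", "B", "C", moves)
```

The recursion mirrors the reduction described above: moving a tower of height n costs two moves of a tower of height n − 1 plus one direct move, so a tower of height n takes 2^n − 1 moves in total.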
2. Algorithm efficiency
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms.
In all these examples, the divide-and-conquer approach led to an improvement in the asymptotic cost of the solution. For example, if (a) the base cases have constant-bounded size and the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a bounded number p of subproblems of size about n/p at each stage, then the cost of the divide-and-conquer algorithm will be O(n log_p n).
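Merge sort, mentioned above, is a concrete instance of this pattern: p = 2 subproblems of size about n/2, with linear-time splitting and combining, which gives the O(n log n) bound. A minimal sketch:

```python
def merge_sort(a):
    """Sort a list by splitting it into two halves, sorting each
    recursively, and merging the sorted halves in linear time."""
    if len(a) <= 1:                 # constant-size base case
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # subproblem 1 of size ~n/2
    right = merge_sort(a[mid:])     # subproblem 2 of size ~n/2
    # Combine step: merge the two sorted halves in O(n) time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```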
3. Parallelism
Divide-and-conquer algorithms are naturally adapted for execution on multiprocessor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct subproblems can be executed on different processors.
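Because the two halves in a divide-and-conquer split are independent subproblems, each can be handed to a separate worker. The sketch below uses Python threads purely to show the structure; in CPython, real speedups for CPU-bound work would need processes rather than threads, and the size threshold is an arbitrary illustrative choice:

```python
from concurrent.futures import ThreadPoolExecutor

def total(nums):
    """Divide-and-conquer summation with the two halves submitted
    to separate workers; the subproblems share no state."""
    if len(nums) <= 1000:           # base case: small enough to sum directly
        return sum(nums)
    mid = len(nums) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        lo = pool.submit(total, nums[:mid])   # subproblem 1
        hi = pool.submit(total, nums[mid:])   # subproblem 2
        return lo.result() + hi.result()      # combine the partial results
```

Note that no coordination between the workers is required until the combine step, which is exactly the property the paragraph above describes.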
4. Memory access
Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a subproblem is small enough, it and all its subproblems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter. Moreover, divide-and-conquer designs for important problems (e.g., sorting, FFTs, and matrix multiplication) can be optimal cache-oblivious algorithms: they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size; this can also use the cache optimally, but only when the algorithm is tuned for the specific cache size(s) of a particular machine.
5. Round-off control
In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method. For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a divide-and-conquer algorithm called pairwise summation that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first, and pays the overhead of the recursive calls, it is usually more accurate.
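A sketch of pairwise summation as just described (the function name and base-case cutoff are illustrative):

```python
def pairwise_sum(xs):
    """Recursively sum each half, then add the two partial sums.
    Rounding error grows roughly with log n instead of n."""
    if len(xs) <= 2:                # base case: a handful of terms
        return sum(xs)
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

data = [0.1] * 1_000_000
naive = 0.0
for x in data:                      # simple loop: one long chain of additions
    naive += x
paired = pairwise_sum(data)
# paired is typically much closer to the exact value 100000.0 than naive,
# because each addend passes through only ~log2(n) rounded additions.
```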
DISADVANTAGES:
One of the most common issues with this sort of algorithm is that the recursion is slow, which in some cases outweighs any advantage of the divide-and-conquer process.
Another concern is that it can sometimes become more complicated than a basic iterative approach, especially in cases with a large n. In other words, to add a large set of numbers, a simple loop that adds them one by one is a much simpler approach than dividing the numbers into two groups, adding each group recursively, and then adding the sums of the two groups together.
Another downfall is that once the problem is broken down into subproblems, the same subproblem can occur many times. In cases like these, it often pays to identify and save the solution to the repeated subproblem, a technique commonly referred to as memoization.
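The classic illustration is the naive recursive Fibonacci, where the same subproblems recur exponentially often; caching each solution (here with the standard library's functools.lru_cache) reduces the work to linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoization: save each subproblem's answer
def fib(n):
    # Without the cache, fib(n - 2) is recomputed inside fib(n - 1),
    # and so on: the same subproblems occur many times.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```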
The last recognizable implementation issue is that these algorithms can be carried out by a non-recursive program that stores the pending subproblems in an explicit stack, which gives more freedom in deciding the order in which the subproblems are solved.
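For example, a divide-and-conquer summation can be written without recursion by keeping pending subproblems on an explicit stack (a sketch; the names and the base-case cutoff are illustrative):

```python
def sum_with_explicit_stack(xs):
    """Non-recursive divide and conquer: an explicit stack of pending
    subproblems replaces the call stack."""
    total = 0.0
    stack = [(0, len(xs))]           # each entry is one subproblem (a range)
    while stack:
        lo, hi = stack.pop()         # popping order decides which runs next
        if hi - lo <= 2:
            total += sum(xs[lo:hi])  # base case: solve directly
        else:
            mid = (lo + hi) // 2
            stack.append((lo, mid))  # push both halves as new subproblems
            stack.append((mid, hi))
    return total
```

Here the LIFO pop mimics depth-first recursion, but a different container (e.g. a queue) would solve the subproblems in a different order, which is exactly the extra freedom the text describes.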
These implementation issues do not make the approach a poor choice for solving difficult problems; rather, this paradigm is the basis of many frequently used algorithms.