Definition:
An algorithm is any well-defined computational procedure that takes some value, or a set of values, as input and produces some value, or a set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
An algorithm is a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship, and the algorithm describes a specific computational procedure for achieving that relationship. In other words, an algorithm is a step-by-step procedure that solves a particular problem within some specific time, with a correct output.
Types of algorithms:
The most common types of algorithms are

1. Divide and conquer approach:
The divide and conquer paradigm involves three steps at each level of recursion.
 Divide.
 Conquer.
 Combine.
DIVIDE:
The problem is divided into a number of subproblems.
CONQUER:
The subproblems are then solved recursively. If the subproblem sizes are small enough, just solve the subproblems in a straightforward manner.
COMBINE:
In this step, the solutions of the subproblems are then combined into the solution for the original problem.
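The three steps can be illustrated with merge sort, a classic divide and conquer algorithm (a minimal Python sketch, not the only possible formulation):

```python
def merge_sort(a):
    # Conquer (base case): a list of 0 or 1 items is already sorted.
    if len(a) <= 1:
        return a
    # Divide: split the list into two halves.
    mid = len(a) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```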
2. DYNAMIC PROGRAMMING:
Dynamic programming, like the divide and conquer approach, solves problems by combining the solutions to subproblems. ("Programming" in this context refers to a tabular method, not to writing computer code.) Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subproblems.
Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.
The development of a dynamic programming algorithm can be broken into a sequence of four steps.
 Characterize the structure of an optimal solution.
 Recursively define the value of an optimal solution.
 Compute the value of an optimal solution in a bottom-up fashion.
 Construct an optimal solution from computed information.
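The four steps above can be sketched on the classic rod-cutting problem; the price table below is the standard textbook example and is used here only for illustration:

```python
def cut_rod(prices, n):
    """Maximum revenue from cutting a rod of length n.
    prices[i] is the price of a piece of length i (prices[0] is unused)."""
    # Step 3: compute the value of an optimal solution bottom-up.
    revenue = [0] * (n + 1)
    first_cut = [0] * (n + 1)  # records the choice made, for step 4
    for j in range(1, n + 1):
        best = -1
        # Step 2 (recursive definition): r[j] = max over i of p[i] + r[j - i]
        for i in range(1, j + 1):
            if prices[i] + revenue[j - i] > best:
                best = prices[i] + revenue[j - i]
                first_cut[j] = i
        revenue[j] = best
    # Step 4: construct an optimal solution from the computed information.
    cuts, remaining = [], n
    while remaining > 0:
        cuts.append(first_cut[remaining])
        remaining -= first_cut[remaining]
    return revenue[n], cuts

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]
print(cut_rod(prices, 8))  # (22, [2, 6]) -- cut into pieces of length 2 and 6
```

Step 1 (characterizing the structure) is reflected in the observation that an optimal cutting of a rod of length j consists of one first piece of length i plus an optimal cutting of the remaining length j - i.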
3: GREEDY ALGORITHM:
A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. Greedy algorithms do not always yield optimal solutions, but for many problems they do. The greedy method is quite powerful and works well for a wide range of problems. A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. At each decision point in the algorithm, the choice that seems best at the moment is chosen. This heuristic strategy does not always produce an optimal solution, but for some problems, such as the activity-selection problem, it does.
Following steps are involved in a greedy algorithm
 Determine the optimal structure of the problem.
 Develop a recursive solution.
 Prove that at any stage of the recursion, one of the optimal choices is the greedy choice, thus, it is always safe to make the greedy choice.
 Show that all but one of the subproblems induced by having made the greedy choice are empty.
 Develop a recursive algorithm that implements the greedy strategy.
 Convert the recursive algorithm to an iterative algorithm.
GREEDY CHOICE PROPERTY:
The first key ingredient is the greedy choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. In other words, when we are considering which choice to make, we make the choice that looks best in the current problem, without considering results from subproblems.
Difference between greedy algorithm and dynamic programming algorithm:
In a dynamic programming algorithm, we make a choice at each step, but the choice usually depends on the solutions to subproblems. We typically solve dynamic programming problems in a bottom-up manner, progressing from smaller subproblems to larger subproblems. In a greedy algorithm, we make whichever choice seems best at the moment and then solve the subproblem that arises after the choice is made. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to subproblems. Thus, unlike dynamic programming, which solves the subproblems bottom-up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another and reducing each given problem instance to a smaller one.
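The contrast can be seen on the coin-change problem with the illustrative denominations {1, 3, 4}: the greedy strategy commits top-down to the largest coin that fits and can miss the optimum, while the dynamic program solves every smaller amount bottom-up first (a minimal sketch, not a general coin-change library):

```python
def greedy_change(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits (top-down).
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            used.append(c)
            amount -= c
    return used if amount == 0 else None

def dp_change(coins, amount):
    # Dynamic programming: fewest coins for every smaller amount (bottom-up).
    INF = float('inf')
    best = [0] + [INF] * amount  # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None

coins = [1, 3, 4]
print(greedy_change(coins, 6))  # [4, 1, 1] -- three coins, not optimal
print(dp_change(coins, 6))      # 2 -- the optimum is 3 + 3
```

With these denominations the greedy choice (take a 4 first) cannot be undone, so greedy uses three coins where the bottom-up table finds the two-coin optimum.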
