Duration : 00:06:00

A greedy algorithm solves a problem by choosing the best available option at each stage and never revisiting earlier decisions, even when those choices later prove suboptimal. It works top-down, trusting that locally best choices will lead to a globally best result. While a greedy approach does not always yield the optimal solution, it reliably solves problems that have the Greedy Choice Property and Optimal Substructure.

We can determine whether a greedy approach applies to an optimization problem by checking for the following two properties:

- Greedy Choice Property.
- Optimal Substructure.

Greedy Choice Property: an optimal solution can be obtained by selecting the best option at each step, without revisiting prior phases.

Optimal Substructure: the optimal overall solution to the problem contains optimal solutions to its subproblems.
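A quick way to see these properties in action (an illustration of my own, not from the text) is coin change with a canonical coin system such as US coins, where repeatedly taking the largest coin that fits yields an optimal answer:

```python
# Greedy coin change: works for "canonical" coin systems like [25, 10, 5, 1],
# but can fail for arbitrary systems (e.g. coins [4, 3, 1], amount 6) --
# a reminder that the greedy choice property must hold for correctness.
def greedy_change(amount, coins):
    """Pick the largest coin that fits at each step; choices are never revisited."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]
```

Each pick is locally best (the greedy choice), and the remaining amount is a smaller instance of the same problem (optimal substructure).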

Key characteristics of greedy algorithms:

- Greedy algorithms prioritize the best available option at every phase.
- Decisions are final and cannot be altered.
- They may not always find the optimum solution, as seen in the Travelling Salesman Problem.
- Despite this, they are faster than other options such as dynamic programming.
- Proving their correctness can be difficult.
- For complex problems where exact methods are too slow, they can provide near-optimal solutions quickly.

Advantages of greedy algorithms:

- Greedy algorithms are simple, effective, and easy to implement.
- When they apply, they find optimal solutions faster than dynamic programming.
- Simplifying design and analysis, they make systematic decisions based on available data.
- They handle large inputs effectively by making local optimal choices, thanks to their low time complexity.
- Greedy algorithms use very little memory, storing only the current best response.

Disadvantages of greedy algorithms:

- Greedy algorithms may not always produce an optimal result.
- Short-term optimal decisions can result in long-term undesirable outcomes.
- Greedy algorithms prioritize locally optimal choices over the overall solution, which can sometimes produce poor outcomes.
- Their decisions are final and cannot be revisited, even if better possibilities emerge later, potentially leading to inferior or inaccurate solutions.
- They are less robust across different problem instances, since their performance depends heavily on the decision criterion used and the structure of the problem.

Well-known algorithms that use a greedy strategy include:

- Selection Sort
- Minimum Spanning Tree
- Prim's Algorithm
- Dijkstra's Algorithm
- Kruskal's Algorithm
- Huffman Coding
- Ford-Fulkerson Algorithm

A minimum spanning tree connects all vertices of a weighted graph while keeping the total of the edge weights to a minimum. Applications of minimum spanning trees include:

- To find paths on a map.
- To create networks such as telecommunications, water supply, and power grids.

Prim's minimum spanning tree algorithm starts with an empty spanning tree and maintains two sets of vertices: those already in the MST and those still to be added. At each step it selects the minimum-weight edge connecting the two sets and moves that edge's outside endpoint into the MST set; with a binary heap this gives an O(E log V) time complexity.
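Assuming an adjacency-list representation, a minimal Python sketch of Prim's algorithm with a binary heap (`heapq`) might look like this; the function and graph names are illustrative:

```python
import heapq

def prim_mst(graph, start):
    """graph: dict vertex -> list of (weight, neighbor). Returns total MST weight.
    Sketch of Prim's algorithm with a binary heap: O(E log V)."""
    visited = set()            # vertices already in the MST
    heap = [(0, start)]        # candidate edges into the not-yet-added set
    total = 0
    while heap and len(visited) < len(graph):
        weight, u = heapq.heappop(heap)   # cheapest edge crossing the cut
        if u in visited:
            continue                      # stale entry; endpoint already added
        visited.add(u)
        total += weight
        for w, v in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (w, v))
    return total

graph = {
    'A': [(1, 'B'), (4, 'C')],
    'B': [(1, 'A'), (2, 'C')],
    'C': [(4, 'A'), (2, 'B')],
}
print(prim_mst(graph, 'A'))  # 3 (edges A-B and B-C)
```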

Given an input graph, Kruskal's algorithm determines the subset of edges that forms a tree containing every vertex with the lowest total weight. It selects edges iteratively, starting with the lowest-weight edge and continuing until all vertices are covered.

The steps required to implement Kruskal's algorithm are as follows:

- Sort the edges by weight, from low to high.
- Select the edge with the lowest weight and add it to the spanning tree.
- If adding an edge would create a cycle, reject it.
- Continue adding edges until all vertices are connected.
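The steps above can be sketched in Python, using a union-find structure to detect cycles (the names and sample edges are illustrative):

```python
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns the list of MST edges; union-find rejects cycle-forming edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # sort edges by weight, low to high
        ru, rv = find(u), find(v)
        if ru != rv:                       # same root would mean a cycle: reject
            parent[ru] = rv                # union the two components
            mst.append((w, u, v))          # accept the lowest-weight safe edge
    return mst

edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2)]
print(kruskal_mst(3, edges))  # [(1, 0, 1), (2, 1, 2)]
```

The heaviest edge (4, 0, 2) is rejected because both endpoints are already connected.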

Dijkstra's algorithm finds the shortest paths from a source vertex to the other vertices of a graph. Because the shortest path between two vertices need not include all of the graph's vertices, the result differs from a minimum spanning tree. Dijkstra's algorithm relies on the observation that every subpath B->D of a shortest path A->D between vertices A and D is itself a shortest path from B to D.

- Start with a weighted graph.
- Set the initial path value of the source vertex to zero and all other vertices to infinity.
- In each iteration, choose the unvisited vertex with the least path length.
- Update a neighbour's path length whenever a shorter path through the chosen vertex is found.
- Never update the path lengths of already-visited vertices.
- Continue iterating until all vertices are visited.
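The steps above can be sketched in Python, with a priority queue standing in for "choose the unvisited vertex with the least path length" (names and the sample graph are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> list of (weight, neighbor).
    Returns a dict of shortest distances from source."""
    dist = {v: float('inf') for v in graph}   # all path values start at infinity
    dist[source] = 0                          # ...except the source
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # unvisited vertex with least path length
        if d > dist[u]:
            continue                          # stale entry: u was already finalized
        for w, v in graph[u]:
            if d + w < dist[v]:               # shorter path found: update
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {'A': [(1, 'B'), (4, 'C')], 'B': [(2, 'C')], 'C': []}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3}
```

Note that C is reached with cost 3 via B, not 4 via the direct edge.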

Huffman Coding, created by David A. Huffman, is a technique that compresses data without losing any information (lossless compression). It is especially useful for compressing data that contains frequently recurring characters.

- Calculate the frequency of each character in the string.
- Sort the characters by frequency and store them in a priority queue (Q).
- Make a leaf node for each unique character.
- To construct the Huffman tree, repeat until Q holds a single node:
- Create an empty internal node (z).
- Assign the two minimum-frequency nodes as z's left and right children.
- Set z's frequency to the sum of those two frequencies.
- Remove the two child nodes from Q and insert z into the tree.
- Finally, for every non-leaf node, assign 0 to the left edge and 1 to the right edge.
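These steps map naturally onto Python's `heapq` as the priority queue. The sketch below (names illustrative) builds the tree by repeatedly merging the two minimum-frequency nodes, then reads off the codes by walking the tree:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman tree from character frequencies and return the code table."""
    freq = Counter(text)
    # Priority queue entries: (frequency, tie-breaker, tree);
    # a tree is either a character (leaf) or a (left, right) pair (internal node z).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two minimum-frequency nodes...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))  # ...merged into z
        count += 1
    codes = {}

    def walk(node, code):
        if isinstance(node, tuple):          # internal node: 0 left, 1 right
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:
            codes[node] = code or '0'        # single-character edge case
    walk(heap[0][2], '')
    return codes

codes = huffman_codes("aaabbc")
print(codes)
```

The most frequent character ('a') gets the shortest code, which is what makes the compression effective.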

The Ford-Fulkerson algorithm determines the maximum flow in a network or graph in which each edge has a capacity. Two special vertices are given: the source and the sink. The source has only outward edges and no inward edges, while the sink has only inward edges and no outward edges.

- Set the initial flow on all edges to zero.
- While an augmenting path from source to sink exists, add its bottleneck capacity to the flow.
- Update the residual graph after each augmentation.
- Use reverse (residual) edges so that flow sent earlier can be rerouted, which guarantees the maximum flow.
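A minimal depth-first Ford-Fulkerson sketch in Python (the names and sample network are illustrative) shows the residual graph and reverse edges in action:

```python
def ford_fulkerson(capacity, source, sink):
    """capacity: dict u -> dict v -> edge capacity. Returns the maximum flow.
    DFS-based Ford-Fulkerson: reverse residual edges let flow be rerouted."""
    # Build the residual graph, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)

    def dfs(u, flow, visited):
        if u == sink:
            return flow
        visited.add(u)
        for v, cap in residual[u].items():
            if cap > 0 and v not in visited:
                pushed = dfs(v, min(flow, cap), visited)  # bottleneck so far
                if pushed:
                    residual[u][v] -= pushed   # consume forward capacity
                    residual[v][u] += pushed   # grow reverse capacity
                    return pushed
        return 0

    max_flow = 0
    while True:
        pushed = dfs(source, float('inf'), set())  # find an augmenting path
        if not pushed:
            return max_flow                        # no path left: flow is maximal
        max_flow += pushed

capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(ford_fulkerson(capacity, 's', 't'))  # 5
```

Using breadth-first search to find augmenting paths instead of DFS gives the Edmonds-Karp variant, which bounds the number of augmentations.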