The five most commonly used algorithms are: 1. the divide and conquer method; 2. the greedy algorithm, a simpler and faster design technique for certain optimization problems; 3. the dynamic programming algorithm; 4. the backtracking method, a kind of optimized search method; 5. the branch and bound method.
What is an algorithm?
An algorithm is an accurate and complete description of a solution to a problem: a series of clear instructions for solving it. An algorithm represents a systematic way of describing the strategy used to solve a problem.
In other words, an algorithm is a series of steps used to solve a specific problem, and it must have the following three important characteristics:
1. Finiteness. After executing a finite number of steps, the algorithm must terminate.
2. Accuracy. Each step of the algorithm must be exactly defined.
3. Feasibility. A specific algorithm must be able to solve a specific problem in a specific amount of time.
The five most commonly used algorithms
Divide and conquer method
The divide and conquer method breaks a complex problem into two or more identical or similar sub-problems, then breaks those sub-problems into even smaller sub-problems, and so on, until the sub-problems can be solved simply and directly; the solution to the original problem is then obtained by merging the solutions of the sub-problems.
Problems that can be solved by the divide-and-conquer method generally have the following characteristics:
1) The problem can be solved easily once its scale is reduced to a certain extent;
2) The problem can be decomposed into several smaller instances of the same problem, i.e. the problem has the optimal substructure property;
3) The solutions of the sub-problems into which the problem is decomposed can be combined into a solution of the original problem;
4) The sub-problems into which the problem is decomposed are independent of each other, i.e. the sub-problems share no common sub-sub-problems.
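To make this concrete, here is a minimal Python sketch of merge sort, a textbook divide and conquer algorithm (the names `merge_sort` and `merge` are illustrative, not from the original article): the list is split in half, each half is sorted recursively, and the two sorted halves are merged.

```python
def merge_sort(items):
    """Sort a list by splitting it, sorting each half recursively, and merging."""
    if len(items) <= 1:                 # a sub-problem of size 0 or 1 is solved directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # solve the two smaller, independent sub-problems
    right = merge_sort(items[mid:])
    return merge(left, right)           # combine the sub-solutions into the full solution

def merge(left, right):
    """Merge two already sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]
```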
Greedy algorithm
The greedy algorithm is a simpler and faster design technique for certain optimization problems.
The greedy method proceeds step by step. At each step it makes the choice that looks best according to some optimization measure, based only on the current situation and without considering all possible overall configurations, which saves the large amount of time that would be needed to exhaustively enumerate every possibility in search of an optimal solution. It works top-down, making successive greedy choices iteratively; each greedy choice reduces the problem to a smaller sub-problem, and through the greedy choice at each step it hopes to arrive at an optimal solution of the problem. Although each step is guaranteed to be locally optimal, the resulting global solution is not necessarily optimal, and the greedy algorithm never backtracks.
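As a small illustrative sketch (the coin denominations and the name `greedy_change` are assumptions for the example, not part of the original text), making change with the largest coins first is a greedy strategy: each step takes the locally best coin without looking ahead. With the usual denominations the result is optimal, but with an unusual set such as 4/3/1 the greedy answer can be worse than the true optimum, which is exactly the limitation described above.

```python
def greedy_change(amount, denominations):
    """Make change by always taking the largest coin that still fits (greedy choice)."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:               # keep taking the locally best (largest) coin
            coins.append(coin)
            amount -= coin
    return coins if amount == 0 else None   # None if exact change was not reached

print(greedy_change(63, [25, 10, 5, 1]))   # [25, 25, 10, 1, 1, 1] - optimal here
print(greedy_change(6, [4, 3, 1]))         # [4, 1, 1] - greedy, but [3, 3] uses fewer coins
```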
Dynamic programming algorithm
Dynamic programming is a method used in mathematics and computer science to solve optimization problems that contain overlapping sub-problems. The basic idea is to decompose the original problem into similar sub-problems and, in the course of solving them, to obtain the solution of the original problem from the solutions of the sub-problems. The idea of dynamic programming underlies many algorithms and is widely used in computer science and engineering.
Dynamic programming methods are usually used to solve optimization problems. Such a problem may have many feasible solutions, each with a value, and a solution that achieves the optimal value is called an optimal solution of the problem. We say "an" optimal solution rather than "the" optimal solution because several different solutions may all achieve the optimal value.
Steps to design a dynamic programming algorithm:
1) Characterize the structure of an optimal solution;
2) Recursively define the value of an optimal solution;
3) Compute the value of an optimal solution, usually bottom-up;
4) Construct an optimal solution from the computed information.
Dynamic programming is similar to the divide and conquer method in that it combines the solutions of sub-problems to obtain the solution of the original problem. The difference is that the sub-problems of the divide and conquer method are independent of each other, whereas dynamic programming is applied when the sub-problems overlap.
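As an illustrative sketch (all names are assumptions for the example), the coin change problem from the greedy sketch above can be solved exactly with bottom-up dynamic programming, following the four steps listed earlier: `best[v]` stores the value of an optimal solution for every amount `v`, the table is filled bottom-up because the sub-problems overlap, and the recorded choices are then used to construct one optimal solution.

```python
def min_coins(amount, denominations):
    """Fewest coins that sum to `amount`, computed bottom-up with dynamic programming."""
    INF = float("inf")
    best = [0] + [INF] * amount           # best[v] = fewest coins that make the value v
    choice = [0] * (amount + 1)           # one optimal coin choice per value, for reconstruction
    for v in range(1, amount + 1):        # solve every smaller sub-problem exactly once
        for coin in denominations:
            if coin <= v and best[v - coin] + 1 < best[v]:
                best[v] = best[v - coin] + 1
                choice[v] = coin
    if best[amount] == INF:               # no combination of coins reaches the amount
        return None
    coins, v = [], amount                 # step 4: construct an optimal solution
    while v > 0:
        coins.append(choice[v])
        v -= choice[v]
    return coins

print(min_coins(6, [4, 3, 1]))   # [3, 3] - the optimum that the greedy sketch missed
```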
Backtracking method
The backtracking method (also called the explore-and-backtrack method) is an optimized search method: it searches forward according to optimization criteria in order to reach the goal. When, at some step of the exploration, it finds that the earlier choice is not optimal or cannot reach the goal, it steps back and makes a different choice. This technique of going back and trying again when a path does not work out is the backtracking method, and a state point that satisfies the backtracking condition is called a "backtrack point".
The basic idea is as follows: in the solution space tree that contains all solutions to the problem, explore the tree depth-first starting from the root node. When a node is reached, first determine whether it can contain a solution to the problem; if it can, continue exploring from that node, and if it cannot, backtrack layer by layer to its ancestor nodes.
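A standard illustration of this idea is the N-Queens problem. In the Python sketch below (the names are illustrative), the solution space tree is explored depth-first: a queen is placed in the next row only when the partial placement can still lead to a solution, and otherwise the last choice is undone, which is the backtracking step.

```python
def solve_n_queens(n):
    """Place n non-attacking queens on an n x n board using depth-first backtracking."""
    solutions = []
    columns = []                           # columns[r] = column of the queen already placed in row r

    def safe(row, col):
        """Check that no previously placed queen attacks the square (row, col)."""
        for r, c in enumerate(columns):
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:                       # every row has a queen: a complete solution was found
            solutions.append(columns[:])
            return
        for col in range(n):
            if safe(row, col):             # explore this branch only if it may contain a solution
                columns.append(col)
                place(row + 1)
                columns.pop()              # backtrack: undo the choice and try the next column

    place(0)
    return solutions

print(len(solve_n_queens(6)))   # 4 distinct solutions on a 6 x 6 board
```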
Branch and bound method
The branch and bound method is a very widely used algorithmic technique. Applying it takes some skill, and the details of the method differ from one type of problem to another.
The basic idea of the branch and bound method is to search the space of all feasible solutions (of which there are finitely many) of a constrained optimization problem. As the algorithm runs, the entire feasible solution space is repeatedly divided into smaller and smaller subsets (called branching), and a lower or upper bound is computed for the value of the solutions within each subset (called bounding). After each branching step, subsets whose bound already exceeds the value of a known feasible solution are not branched further. In this way many subsets of solutions (that is, many nodes of the search tree) can be ignored, which narrows the search scope. The process continues until a feasible solution is found whose value is no worse than the bound of any remaining subset; for this reason the algorithm generally obtains an optimal solution.
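As a hedged sketch of this idea (the names and the fractional-knapsack bound are choices made for the example, not something stated in the original text), the code below solves the 0/1 knapsack problem by branch and bound: every node branches on taking or skipping the next item, each branch gets an optimistic upper bound by filling the remaining capacity with fractional items, and a branch is discarded as soon as its bound cannot beat the best feasible solution found so far. Positive item weights are assumed.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Maximum total value of a 0/1 knapsack, found by branch and bound."""
    # Sort items by value density so the fractional bound is easy to compute (weights must be positive).
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0                                   # value of the best feasible solution found so far

    def bound(index, value, room):
        """Optimistic upper bound: greedily fill the remaining room, allowing a fractional item."""
        for v, w in items[index:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w    # take a fraction of the first item that does not fit
        return value

    def search(index, value, room):
        nonlocal best
        best = max(best, value)                # every node corresponds to a feasible partial solution
        if index == len(items) or bound(index, value, room) <= best:
            return                             # bounding: this subset cannot beat the best known value
        v, w = items[index]
        if w <= room:
            search(index + 1, value + v, room - w)   # branch 1: take the item
        search(index + 1, value, room)               # branch 2: skip the item

    search(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220
```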