| Algorithm | Reference | Merits | Demerits | Compared with |
| --- | --- | --- | --- | --- |
| Least-squares algorithm | [4] | (1) High calculation speed; (2) High solution precision | (1) Requires an appropriate initial model; (2) Requires partial-derivative calculation; (3) Prone to falling into local minima | None |
| Levenberg–Marquardt algorithm combined with the singular value decomposition technique | [5] | (1) High calculation speed; (2) Excellent stability | (1) Requires an appropriate initial model; (2) Requires partial-derivative calculation; (3) Prone to falling into local minima | None |
| Occam algorithm | [6] | (1) High calculation speed; (2) High solution precision; (3) Excellent stability | (1) Requires an appropriate initial model; (2) Requires partial-derivative calculation; (3) Prone to falling into local minima | None |
| Genetic algorithm | [7] | (1) Strong ability to escape local minima; (2) Independent of the initial model; (3) Avoids partial-derivative calculation | (1) High computational cost; (2) Low solution accuracy | None |
| Genetic algorithm combining elite selection and a dynamic mutation strategy | [8] | (1) Excellent stability; (2) Strong ability to escape local minima; (3) Independent of the initial model; (4) Avoids partial-derivative calculation | (1) High computational cost; (2) Low solution accuracy | Marquardt algorithm |
| Genetic algorithm combined with marginal posterior probability density estimation | [9] | (1) Strong ability to escape local minima; (2) Independent of the initial model; (3) Avoids partial-derivative calculation | (1) High computational cost; (2) Low solution accuracy | None |
| Heat-bath simulated annealing algorithm | [10] | (1) Strong ability to escape local minima; (2) Independent of the initial model; (3) Avoids partial-derivative calculation; (4) Suitable for parallel programming | (1) Low solution accuracy | Levenberg–Marquardt algorithm and fast simulated annealing algorithm |
| Artificial neural network | [18] | (1) Excellent stability; (2) High inversion efficiency | (1) Requires a large amount of training data; (2) Time-consuming network training | Monte Carlo approach and gray wolf optimizer |
| LSTM based on the first height last velocity | [19] | (1) Excellent stability; (2) High inversion efficiency | (1) Requires a large amount of training data; (2) Time-consuming network training | None |
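To illustrate why the stochastic methods in the table avoid partial-derivative calculation and can escape local minima, the following is a minimal, generic Metropolis-style simulated annealing sketch (not the heat-bath variant of [10]; the one-dimensional cost function and all parameter values below are illustrative assumptions only):

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, t_min=1e-3, alpha=0.95):
    """Generic simulated annealing sketch: needs only cost-function
    evaluations, never derivatives of `cost`."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    rng = random.Random(0)  # fixed seed so the run is reproducible
    while t > t_min:
        # Propose a random perturbation of the current model parameter.
        cand = x + rng.uniform(-step, step)
        fc = cost(cand)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta/t), which lets the search climb out
        # of local minima while the temperature t is still high.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Multimodal toy cost: a local minimum near x = +2, the global one near x = -2.
f = lambda x: (x**2 - 4) ** 2 + 3 * x
x_best, f_best = simulated_annealing(f, x0=2.0)
```

The trade-off listed in the table is visible here: no initial model or derivative is needed, but accuracy is limited because the search only samples the cost surface rather than converging along a gradient.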