Journal of Sensors
Volume 2018, Article ID 6357905, 9 pages
https://doi.org/10.1155/2018/6357905
Research Article

Study of the Magnetic Properties of Haematite Based on Spectroscopy and the IPSO-ELM Neural Network

1Intelligent Mine Research Center, Northeastern University, Shenyang 110819, China
2Information Science & Engineering School, Northeastern University, Shenyang 110004, China
3School of Science, Shenyang Jianzhu University, Shenyang 1100000, China
4College of Control Technology, Le Quy Don Technical University, Hanoi 100000, Vietnam

Correspondence should be addressed to Dong Xiao; xiaodong@ise.neu.edu.cn

Received 14 July 2018; Revised 26 August 2018; Accepted 24 September 2018; Published 28 November 2018

Academic Editor: Harith Ahmad

Copyright © 2018 Yachun Mao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The detection of the magnetic properties of haematite plays an important role in adjusting the beneficiation process of haematite and improving metal recovery. The existing methods for measuring the magnetic properties of iron ore either have large errors or take a long time. It is therefore essential to find a method that can quickly and accurately detect the magnetic properties of haematite. This paper presents such a method based on spectroscopy and an extreme learning machine optimized by improved particle swarm optimization (IPSO-ELM). The improved particle swarm optimization algorithm is used to optimize the input weights, hidden layer biases, and number of hidden layer nodes of the ELM network. Introducing a linearly decreasing inertia weight into the particle swarm algorithm, taking the norm of the output weights into account during the particle update process, and using the idea of mutation to change the length of the particles give the IPSO-ELM better stability and generalization ability. The experimental results show that the IPSO-ELM prediction model has good prediction performance and better generalization ability than the ELM and PSO-ELM prediction models. Compared with traditional chemical analysis and manual methods, this method has great advantages in terms of economy, speed, and accuracy.

1. Introduction

Iron ore is the main raw material for the production of steel, and haematite is one of the main types of iron ore. Beneficiation is an important step in the production of iron ore, and the choice of beneficiation process plays a decisive role in the quality of the product. Taking the Anshan mining area as an example, the iron content of the ore is 20–40%, with an average of 30%. The ore must be beneficiated, after which the iron content can reach more than 60%. For haematite, detecting its magnetic property plays an important role in adjusting its beneficiation process and improving metal recovery. Studies have shown [1] that for haematite with a magnetic property above 20%, if the weak magnetic separation process is adopted, the tailings grade is only approximately 7% and the metal recovery rate is above 87%. For haematite with a magnetic property between 10% and 20%, under the same weak magnetic separation process, the tailings grade increases to approximately 14% and the metal recovery rate falls to between 64% and 70%. For haematite with a magnetic property below 10%, a single weak magnetic separation process yields a very low metal recovery rate, and a joint process consisting of strong magnetic separation, gravity separation, and flotation is required. There are two existing methods for detecting the magnetic property of iron ore. The first uses a magnetometer. This method is relatively fast; however, in use, the magnetometer is susceptible to interference from the surrounding environment and produces large errors, and the percentage of magnetic iron in the ore samples cannot be accurately determined [2]. The second is the acid dissolution assay. This method can accurately detect the content of magnetic iron in iron ore, but it is expensive and takes a long time.
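The grade thresholds above amount to a simple decision rule. The sketch below is only an illustration of that rule; the function name and the returned strings are our own, not from the source:

```python
def recommend_process(magnetic_property_percent):
    """Illustrative mapping from a haematite sample's magnetic property (%)
    to the beneficiation process suggested by the thresholds cited above."""
    if magnetic_property_percent > 20:
        # weak magnetic separation: ~7% tailings grade, >87% metal recovery
        return "weak magnetic separation"
    elif magnetic_property_percent >= 10:
        # weak magnetic separation still applies, but tailings grade rises to ~14%
        return "weak magnetic separation (reduced recovery)"
    else:
        # a single weak magnetic separation process recovers too little metal
        return "combined strong magnetic separation, gravity separation, and flotation"
```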
Therefore, finding a method that can quickly and accurately detect the magnetic properties of haematite is of great significance for selecting the beneficiation process of haematite ore. Spectral analysis identifies substances and determines their chemical composition and relative content based on the spectrum of a substance. This method has been widely used in classification, the grade identification of rocks and minerals, food quality inspection, and many other areas due to its high analysis speed, simple operation, low cost, and high efficiency [3–6]. The main factor influencing the magnetic property of iron ore is the content of magnetic minerals in the ore. Since the oxides of silicon, magnesium, and calcium in haematite affect its spectral data, the visible and infrared spectra of haematite often contain considerable chemical information unrelated to the detection of its magnetic properties, which makes the spectral data high dimensional and highly redundant. Therefore, the amount of spectral data of haematite must be reduced. Principal component analysis (PCA) [7] uses the idea of dimensionality reduction to convert many indicators into a few comprehensive indicators (principal components), each of which reflects most of the information of the original variables without duplicating the information in the others.

The extreme learning machine (ELM) neural network is a kind of single-hidden-layer feed-forward neural network proposed by Huang et al. in 2004, which has the advantages of fast learning and strong generalization [8]. Due to these advantages, the algorithm has been widely studied and applied in recent years [9–11]. In [12], an evolutionary ELM (E-ELM) was proposed which used the differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. In [13], an improved ELM was proposed by selecting input weights for an ELM with linear hidden neurons. This approach maintains testing accuracy under stable conditions, but it is limited to ELMs with linear hidden neurons. In recent years, many studies have focused on structural improvements of the ELM, such as ELMs with two hidden layers [14] and ELMs with multiple hidden layers [15, 16]. The spectral data of haematite form a highly coupled, nonlinear matrix, and the extreme learning machine, as a neural network model in the field of machine learning, is powerful at nonlinear fitting problems. Therefore, the ELM is used to process the spectral data of haematite to obtain a detection model for the magnetic properties of haematite.

However, the ELM randomly selects the input weights and hidden biases, and some weights and biases will be 0. This causes the ELM to require a large number of hidden layer nodes to achieve the desired effect, and the results will differ greatly from run to run. The particle swarm optimization (PSO) algorithm is a global optimization algorithm proposed by Eberhart and Kennedy in 1995 [17]. Shi and Eberhart introduced a new parameter, the inertia weight, into particle swarm optimization, which enhanced the ability of the algorithm [18]. Because the PSO is relatively simple and easy to implement compared with other optimization algorithms, and because there are not many parameters to be adjusted, it has attracted the attention of many scholars. New achievements have been continuously made in the performance improvement and analysis of the algorithm [19, 20], which is widely used in many areas [21]. In [22], a modified particle swarm optimization (PSO) algorithm was presented in which the whole search space is divided into several grid cells, which improved the efficiency of the algorithm; in addition, simulated annealing is incorporated into the update of a particle's local leader to prevent premature convergence of the swarm. The literature [23] presents the first study of multiobjective particle swarm optimization (PSO) for cost-based feature selection problems; to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to the PSO.

Therefore, the particle swarm optimization algorithm can be used to optimize the ELM input weights and hidden layer biases. In 2006, Xu and Shu proposed an extreme learning machine based on particle swarm optimization (PSO-ELM) [24]. To improve the performance of the PSO algorithm, Han et al. improved the generalization ability of the PSO-ELM by considering the norm of the output weight matrix [25]. However, this improvement does not consider the selection of the number of nodes in the hidden layer of the ELM neural network. Li et al. [26] introduced a mutation operator to optimize the number of hidden layer nodes of the ELM, which enhanced the population diversity and improved the convergence speed of the algorithm, but its generalization ability was poor. Building on these improvements to the traditional ELM and the research on PSO, this paper uses an improved particle swarm optimization (IPSO) algorithm that introduces a linearly decreasing inertia weight into the particle swarm algorithm, applies the idea of mutation to change the length of the particles, and considers the norm of the output weights in the particle update process, in order to optimize the input weights, the hidden biases, and the number of hidden layer nodes of the ELM neural network. This approach improves both the generalization ability and the convergence speed of the algorithm. The extreme learning machine based on the improved particle swarm optimization (IPSO-ELM) is used to process the spectral data of haematite to obtain a model for detecting the magnetic properties of haematite.

Three key issues need to be solved in this paper: data acquisition, data processing, and model building. In the data acquisition phase, haematite samples from the mining area are collected and subjected to spectral and chemical tests to obtain the spectral data of each sample and its accurate magnetic property. In the data processing stage, the high-dimensional haematite spectral data are reduced in dimensionality to facilitate the establishment of the model. In the model establishment stage, the dimension-reduced spectral data and the accurate magnetic properties are used to establish the haematite magnetic property detection model based on the IPSO-ELM algorithm proposed in this paper. The test results are analyzed to verify the feasibility of detecting the magnetic properties of haematite based on spectral data and the ELM algorithm.

2. Related Work

2.1. Sample Preparation and Spectral Testing

The Anshan iron ore mine of Liaoning Province is one of the major iron ore mines in China, and the main type of iron ore in this area is haematite. Therefore, this study selected two mining areas in Anshan as experimental areas and collected haematite samples at the site. The samples are block shaped, approximately 30 cm × 30 cm × 30 cm in size.

Core drilling and cutting are performed on the collected samples. During processing, the core is taken along the direction of the silica and iron bands formed in the ore and then cut perpendicular to the core cylinder to prepare circular lamellar experimental samples with a diameter of 6 cm and a thickness of 0.5–1 cm, as shown in Figure 1. During cutting, the samples are made as thin as possible to ensure that the distribution of the mineral components on the surface of each experimental sample is consistent with the overall sample, so that the spectral test results correspond well with the chemical composition test results.

Figure 1: The prepared experimental samples.

For the spectral tests of the experimental samples, we use an SVC HR-1024 portable ground object spectrometer, shown in Figure 2. Its wavelength range is 0.35–2.5 μm, the number of channels is 1024, the spectral resolution is less than or equal to 8.5 nm, the spectral accuracy is better than ±0.5 nm, and the minimum integration time is 1 s. To reduce the influence of aerosols and the solar radiation propagation path, the spectral testing was conducted during the day from 10:00 to 14:00 under a clear and cloudless sky. During the spectral tests of the haematite samples, the wavelength of light is adjusted, the visible and near-infrared spectral data of the samples are measured, and the reflectivity of the samples at different wavelengths is recorded. The experimental range of the spectrum is 1343.1 nm–2502.1 nm.

Figure 2: SVC HR-1024 portable ground object spectrometer.

In this experiment, 91 experimental samples were collected. First, the samples were chemically analyzed to obtain the accurate magnetic property of each haematite sample: 64 samples had magnetic properties below 10%, 19 samples were at 10–20%, and 8 samples had magnetic properties above 20%. Then, the spectral tests were performed. The sampling integration time was set to 3 s and the field-of-view angle was 4°. The probe of the spectrometer was 0.5 m away from the surface of the haematite sample and perpendicular to it. The spectrometer (SVC HR-1024) was used to perform five spectral tests on each sample, and the spectral data were then averaged to give the spectral data of the sample (973 dimensions). After the spectral tests were completed, the measured spectral data were preprocessed by operations such as roughing out and band fitting to obtain the test spectrum of each haematite sample. Figure 3 shows the spectral plot of a haematite sample.

Figure 3: Spectral curve of haematite samples.
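The five repeated scans per sample described above are combined by simple averaging. A minimal sketch with hypothetical reflectance values (the real vectors are 973-dimensional):

```python
import numpy as np

# Five repeated scans of one sample (hypothetical values); in the paper each
# scan is a 973-dimensional reflectance vector, shortened here for illustration.
scans = np.array([
    [0.42, 0.45, 0.50],
    [0.41, 0.46, 0.49],
    [0.43, 0.44, 0.51],
    [0.42, 0.45, 0.50],
    [0.42, 0.45, 0.50],
])
sample_spectrum = scans.mean(axis=0)  # average over the five scans
```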

The haematite spectral data matrix exhibits a high degree of nonlinearity, and the artificial neural network has a strong adaptability when dealing with nonlinear, highly coupled data samples. Compared with other neural network algorithms, the ELM neural network has the characteristics of fast operation speed and strong generalization ability. Therefore, in this paper, the ELM neural network is combined with the principle of spectral technology to collect data, establish models, and perform prediction experiments.

2.2. Extreme Learning Machine

The extreme learning machine is a feed-forward neural network composed of an input layer, a hidden layer, and an output layer. In the training process of the ELM model, the network weights and threshold parameters are randomly set before training and are not updated during training. The number of hidden layer nodes is also set by experience before training. Using the least-squares algorithm, the ELM neural network obtains the unique optimal solution without falling into a local optimum.

Suppose an extreme learning machine neural network has $L$ hidden layer nodes and $N$ training samples $(x_j, t_j)$, where $x_j = [x_{j1}, \dots, x_{jn}]^T \in \mathbb{R}^n$ and $t_j = [t_{j1}, \dots, t_{jm}]^T \in \mathbb{R}^m$.

Then, the output of the neural network is
$$o_j = \sum_{i=1}^{L} \beta_i \, g(w_i \cdot x_j + b_i), \quad j = 1, \dots, N,$$
where $w_i = [w_{i1}, \dots, w_{in}]^T$ is the weight vector connecting the $i$th hidden neuron and the input neurons, $\beta_i = [\beta_{i1}, \dots, \beta_{im}]^T$ is the weight vector connecting the $i$th hidden neuron and the output neurons, $b_i$ is the bias of the $i$th hidden neuron, and $g(\cdot)$ is the activation function of the additive hidden nodes. The output of the neural network can also be expressed as
$$H\beta = T,$$
where $H$ is the output matrix of the hidden layer, $\beta$ is the output weight matrix of the hidden layer, and $T$ is the expected output.

The traditional ELM algorithm randomly selects the input weights and hidden biases. Training this network is then equivalent to finding the least-squares solution of the linear system $H\beta = T$.

Proof 1. Huang has proved that the minimum-norm least-squares solution of the linear system $H\beta = T$ is
$$\hat{\beta} = H^{\dagger} T,$$
where $H^{\dagger}$ is the Moore-Penrose generalized inverse of $H$, and that this solution is unique.
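The ELM training procedure just described (random input weights and biases, hidden layer output matrix, output weights via the Moore-Penrose pseudoinverse) can be sketched in a few lines of NumPy. This is a generic illustration on toy data, not the authors' code:

```python
import numpy as np

def elm_train(X, T, L, rng):
    """Minimal ELM sketch: random input weights/biases, sigmoid hidden layer,
    output weights beta computed with the Moore-Penrose pseudoinverse."""
    n = X.shape[1]
    W = rng.uniform(-1, 1, (L, n))            # input weights w_i
    b = rng.uniform(-1, 1, L)                 # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # hidden layer output matrix H
    beta = np.linalg.pinv(H) @ T              # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

# Toy regression problem standing in for the spectral data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 3))
T = np.sin(X.sum(axis=1))
W, b, beta = elm_train(X, T, L=20, rng=rng)
pred = elm_predict(X, W, b, beta)
```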

2.3. Particle Swarm Optimization Algorithm

Particle swarm optimization (PSO) is a global optimization algorithm proposed by Kennedy and Eberhart, derived from research on the predation behaviour of bird flocks. The basic idea of the particle swarm algorithm is to find the optimal solution through cooperation and information sharing among the individuals in the group. Because the PSO algorithm is simple in structure, easy to implement, and has few parameters to adjust, it has been widely used in function optimization, neural network training, and other fields.

The particle swarm optimization algorithm is implemented as follows. In a group, each bird is abstracted as a particle in a $D$-dimensional search space. The position of particle $i$ in the $D$-dimensional space is $X_i = (x_{i1}, x_{i2}, \dots, x_{iD})$, and its flight speed is $V_i = (v_{i1}, v_{i2}, \dots, v_{iD})$, $i = 1, 2, \dots, N$. Each particle has a fitness value determined by the objective function.

In each iteration, each particle tracks the best position $P_i$ that it has visited and the best position $P_g$ that the entire population has visited, and its speed and position are continuously updated according to
$$v_{id}^{k+1} = w v_{id}^{k} + c_1 r_1 \left( p_{id} - x_{id}^{k} \right) + c_2 r_2 \left( p_{gd} - x_{id}^{k} \right),$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1},$$
where $k$ is the current number of iterations, $c_1$ and $c_2$ are learning factors, $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$, and $w$ is the inertia weight.
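The velocity and position updates above translate directly into code. A minimal PSO sketch on a toy objective; the parameter values are common defaults, not values from the paper:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO implementing the velocity/position updates described above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_particles, dim))   # positions
    V = np.zeros((n_particles, dim))             # velocities
    P = X.copy()                                 # personal best positions
    p_val = np.array([f(x) for x in X])          # personal best fitness values
    g = P[p_val.argmin()].copy()                 # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = X + V
        vals = np.array([f(x) for x in X])
        better = vals < p_val
        P[better], p_val[better] = X[better], vals[better]
        g = P[p_val.argmin()].copy()
    return g, p_val.min()

# Minimize a shifted sphere function; the optimum is at (1, 1).
best, best_val = pso_minimize(lambda x: ((x - 1.0) ** 2).sum(), dim=2)
```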

3. The Extreme Learning Machine Based on Improved Particle Swarm Optimization

In the traditional ELM algorithm, the input weights and the hidden biases are randomly selected. This inevitably leads to some nonoptimal or unnecessary input weights and hidden biases. Since the output weight matrix of the ELM network is calculated from the input weights and the hidden biases, randomly selecting them increases the number of nodes needed in the hidden layer, causing the ELM to respond to the test set more slowly and to generalize poorly. Similarly, in the ELM algorithm, the number of hidden layer nodes is usually selected based on experience, and this selection directly affects the structure and the performance of the ELM network.

In this paper, the improved particle swarm optimization algorithm is used to optimize the input weights, the hidden biases, and the number of hidden layer nodes of the ELM network. The details of the algorithm are as follows.

First, the swarm is randomly generated. The length of each particle in the population is $L(n+1)$, where $n$ is the number of input layer neurons and $L$ is the number of hidden layer nodes. Each particle in the swarm is composed of a set of input weights and hidden biases:
$$\theta = [w_{11}, w_{12}, \dots, w_{Ln}, b_1, b_2, \dots, b_L].$$

All components in the particles are randomly initialized within the range $[-1, 1]$.

Second, in order to avoid overfitting the ELM, the root mean squared error (RMSE) on the validation set is adopted as the fitness of each particle.
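The validation-set fitness just described is the ordinary root mean squared error; for concreteness:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error, used here as the particle fitness evaluated
    on a held-out validation set to discourage overfitting."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```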

Third, in the traditional PSO algorithm, the inertia weight is a fixed value. It has been proved that although PSO with a constant inertia weight has a faster convergence speed, it tends to fall into local optima in later periods [18]. This paper uses a linearly decreasing inertia weight to improve the PSO optimization performance:
$$w = w_{\max} - \frac{\left( w_{\max} - w_{\min} \right) k}{k_{\max}},$$
where $w_{\max}$ and $w_{\min}$ are the initial and final inertia weights, $k$ is the current number of iterations, and $k_{\max}$ is the maximum number of iterations. At the beginning of the iteration, the inertia weight is large to ensure the global search ability of the algorithm; in later iterations it becomes small, so the algorithm can effectively perform more accurate local optimization.
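The linearly decreasing inertia weight is a one-line schedule. In the sketch below, $w_{\max} = 0.9$ and $w_{\min} = 0.4$ are common defaults in the PSO literature, not values reported in this paper:

```python
def inertia_weight(k, k_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: large early (global search),
    small late (local refinement). w_max/w_min are common defaults."""
    return w_max - (w_max - w_min) * k / k_max
```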

Fourth, as analyzed by Zhu et al. and Bartlett [12, 27], neural networks tend to have better generalization performance when the weights have smaller norms. Therefore, in order to improve the generalization capability of the ELM, the update of the individual best positions and the global best position considers not only the fitness values of the particles but also the norm of the output weight matrix of the ELM network: the best position $P_i$ of the $i$th particle is replaced by its current position $X_i$ when $f(X_i) < f(P_i)$, or when $|f(X_i) - f(P_i)|$ is sufficiently small and $\|\beta_{X_i}\| < \|\beta_{P_i}\|$; the global best position $P_g$ of all particles is updated in the same way. Here, $f(X_i)$, $f(P_i)$, and $f(P_g)$ are the fitness values corresponding to the $i$th particle, the best position of the $i$th particle, and the global best position of all particles, respectively, and $\beta_{X_i}$, $\beta_{P_i}$, and $\beta_{P_g}$ are the corresponding output weight matrices.

Fifth, in order to find the optimal number of hidden layer nodes, this paper introduces the idea of mutation into the update process of the particles, changing the length of the particles in each update and hence the number of hidden layer nodes of the ELM. The number of hidden layer nodes is changed according to (12), where $L^{k+1}$ and $L^{k}$ are the numbers of hidden layer nodes for the $(k+1)$th and $k$th iterations, respectively.

Sixth, update the speed and position of the particles. Since the mutation changes the number of hidden layer nodes of the ELM network to $L^{k+1}$, the length of the next-generation particles of the particle swarm becomes $L^{k+1}(n+1)$.

After the particle is mutated, if the length of the particle increases, the newly added components of the particle velocity and position are randomly initialized, while the existing components are retained.

After the particle is mutated, if the particle length is shortened, the columns of the previous-generation particle velocity and position to be retained are randomly selected.

After the particle is mutated, if the particle length does not change, the particle’s velocity and position are not adjusted.

Finally, the iteration is repeated until the maximum number of iterations is reached or the searched optimal position satisfies a predefined minimum fitness value.

The ELM neural network optimized by the above improved particle swarm optimization algorithm thus has the best input weights, hidden biases, and number of hidden layer nodes.
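Steps one through six can be combined into a single optimization loop. The sketch below is a simplified illustration under stated assumptions: it keeps the hidden-node count fixed rather than mutating it, uses common PSO parameter defaults, clips particle positions for numerical stability, and approximates the norm-based tie-break with a fixed tolerance. None of these specifics come from the paper:

```python
import numpy as np

def hidden_output(X, W, b):
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # sigmoid hidden layer

def fitness(particle, L, Xtr, Ttr, Xva, Tva):
    """Validation RMSE of the ELM encoded by one particle, plus the
    output-weight norm used for the tie-breaking criterion."""
    n = Xtr.shape[1]
    W = particle[: L * n].reshape(L, n)           # input weights
    b = particle[L * n :]                         # hidden biases
    beta = np.linalg.pinv(hidden_output(Xtr, W, b)) @ Ttr
    pred = hidden_output(Xva, W, b) @ beta
    return np.sqrt(np.mean((pred - Tva) ** 2)), np.linalg.norm(beta)

def ipso_elm(Xtr, Ttr, Xva, Tva, L=10, n_particles=15, iters=30, seed=0):
    """Simplified IPSO-ELM sketch: linearly decreasing inertia weight and a
    norm-based tie-break; the hidden-node mutation step is omitted."""
    rng = np.random.default_rng(seed)
    n = Xtr.shape[1]
    dim = L * n + L                               # particle length L(n+1)
    X = rng.uniform(-1, 1, (n_particles, dim))
    V = np.zeros_like(X)
    P = X.copy()
    stats = [fitness(x, L, Xtr, Ttr, Xva, Tva) for x in X]
    p_fit = np.array([s[0] for s in stats])
    p_norm = np.array([s[1] for s in stats])
    gi = p_fit.argmin()                           # index of global best
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters                 # linearly decreasing inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (P[gi] - X)
        X = np.clip(X + V, -5, 5)                 # clip for numerical stability
        for i in range(n_particles):
            f, nb = fitness(X[i], L, Xtr, Ttr, Xva, Tva)
            # update on better fitness, break near-ties by smaller ||beta||
            if f < p_fit[i] or (abs(f - p_fit[i]) < 1e-3 and nb < p_norm[i]):
                P[i], p_fit[i], p_norm[i] = X[i].copy(), f, nb
        gi = p_fit.argmin()
    return P[gi], p_fit[gi]

# Toy regression data standing in for the spectral data.
rng = np.random.default_rng(1)
Xall = rng.uniform(-1, 1, (80, 3))
Tall = np.sin(Xall.sum(axis=1))
Xtr, Ttr, Xva, Tva = Xall[:60], Tall[:60], Xall[60:], Tall[60:]
best, best_fit = ipso_elm(Xtr, Ttr, Xva, Tva)
```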

4. Experiment Results and Discussion

The spectral data of each haematite sample obtained by the spectral measurement are 973-dimensional; the dimensionality is very high, and the data are strongly correlated. Therefore, the PCA method is used to reduce the dimension of the obtained spectral data. The contribution rates of the principal components are shown in Figure 4. The number of principal components is taken as 5 (the accumulated contribution rate reaches 99.9%), and these components are used as the input of the network. The initial data are thus converted from a 91 × 973 matrix to a 91 × 5 matrix, which greatly facilitates the establishment of the experimental models and saves training time.
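The reduction from the 91 × 973 spectral matrix to 91 × 5 principal-component scores can be reproduced with a plain SVD-based PCA. The random matrix below is only a stand-in for the real spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.random((91, 973))    # hypothetical stand-in for the spectral matrix

# PCA via SVD: centre the data, then project onto the top 5 right singular vectors.
centred = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
reduced = centred @ Vt[:5].T       # 91 x 5 principal-component scores
explained = (S[:5] ** 2).sum() / (S ** 2).sum()  # cumulative contribution rate
```

For the real spectra the first 5 components carry 99.9% of the variance; for the random stand-in above the ratio is of course much lower.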

Figure 4: Principal component contribution rate of the haematite spectral data.

Considering that the number of samples is small, the 91 samples are subjected to 10-fold cross-validation: the samples are divided into 10 roughly equal groups, each group serves once as the test set while the remaining 9 groups are used as the training set, and the procedure is thus repeated 10 times. The average root mean square error over the 10 rounds is taken as the result.
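Since 91 samples do not divide evenly into 10 folds, the groups can only be approximately equal. A minimal sketch of the fold construction:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal folds; with n = 91 and
    k = 10 the fold sizes differ by at most one."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(91, 10)
# Each fold serves once as the test set; the other nine folds form the
# training set, and the 10 test-fold RMSEs are averaged into one score.
sizes = sorted(len(f) for f in folds)
```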

To compare the performance of the neural networks, we use the ELM, PSO-ELM, and IPSO-ELM to establish prediction models for the magnetic properties of haematite. The number of nodes in the hidden layer of the ELM is the optimal choice found over multiple tests. The activation function of the hidden layer is the sigmoid function. The population size is an integer: when the population is small, the algorithm is very likely to fall into a local optimum; when it is large, the optimization ability of the PSO is good, but beyond a certain level further growth no longer has a significant effect, and the larger the population, the greater the amount of computation. The population size is generally 20 to 40; for difficult or special problems, it can be set to 100 to 200. The population size used in this experiment is 40, and the optimal maximum number of iterations is 50, based on multiple runs. The performance of the prediction models of the magnetic property of haematite was evaluated from four aspects: the root mean square error, the number of hidden layer nodes, the prediction time, and the output matrix norm.

The simulation was conducted in the MATLAB R2016a environment. The simulation results are shown in Figure 5 and Table 1. Figure 5 shows the prediction results of one of the cross-validations of the magnetic properties of the haematite test set using the ELM, PSO-ELM, and IPSO-ELM models. Table 1 compares the performance of the ELM, PSO-ELM, and IPSO-ELM from the following four aspects: the root mean squared error (RMSE) of the test set, the time consumption of the model, the number of hidden layer nodes in the network, and the norm of the output weights. From Table 1, some conclusions can be drawn as follows.

Figure 5: Prediction results of the ELM, PSO-ELM, and IPSO-ELM.
Table 1: Performance comparison of the ELM, PSO-ELM, and IPSO-ELM.

First, the number of hidden layer nodes required for the PSO-ELM and IPSO-ELM is 18, while the number of hidden layer nodes required for the ELM is 28. Compared with the ELM, the PSO-ELM and IPSO-ELM can obtain smaller prediction errors with fewer hidden layer nodes. Therefore, the PSO-ELM and IPSO-ELM can use a simpler network structure to achieve better prediction results.

Second, the training time of the PSO-ELM and IPSO-ELM is obviously greater than that of the ELM. The time is mainly spent on the selection of the input weights and bias vectors. Because the IPSO-ELM optimizes the number of hidden layer nodes, its running speed is improved relative to that of the PSO-ELM.

Third, with respect to the norm of the output weight matrix, the output matrix norm of IPSO-ELM is obviously smaller than the output matrix norm of the PSO-ELM and ELM, which indicates that the IPSO-ELM has better generalization performance than PSO-ELM and ELM do.

Finally, in terms of the root mean square error of the test set, the RMSE of the IPSO-ELM is smaller than that of the ELM and PSO-ELM, so the IPSO-ELM can more accurately predict the magnetic properties of the haematite.

Table 2 compares the manual detection method, the chemical assay method, and the detection method proposed in this paper based on haematite spectral data and the IPSO-ELM network in terms of detection accuracy, detection time, and detection cost. Although the manual detection method has low costs, its detection accuracy is low. Chemical assays have high precision but are costly and time consuming. The proposed method based on spectral data and the IPSO-ELM algorithm has the advantages of less time consumption, lower costs, and higher prediction accuracy, and it can meet the requirements of industrial production.

Table 2: Comparison of different methods of detecting magnetic properties.

In summary, the model proposed in this paper for detecting the magnetic properties of haematite based on spectral data is an extreme learning machine based on improved particle swarm optimization, with the best input weights, hidden biases, and number of hidden layer nodes. Its generalization ability is better than that of the ELM and PSO-ELM, it is faster than the PSO-ELM, and its prediction accuracy is higher than that of the ELM and PSO-ELM. Compared with the traditional manual detection method, it offers simple operation and high precision; compared with chemical testing methods, it is faster and cheaper.

5. Conclusion

In this paper, a model for detecting the magnetic properties of haematite based on spectral data and the IPSO-ELM network was proposed. The spectral data of 91 haematite samples were collected and analyzed using principal component analysis, and detection models were established using the ELM and PSO-ELM. To further improve the detection accuracy and the generalization ability of the model, an extreme learning machine based on improved particle swarm optimization was proposed. The experimental results show that the IPSO-ELM network detects the magnetic properties of haematite better than the ELM and PSO-ELM models and has better generalization ability. Compared with traditional magnetic property detection methods, the proposed model has the advantages of high precision, fast detection speed, and low cost. It provides a new method for the detection of the magnetic properties of haematite.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This research work is supported by the National Natural Science Foundation of China (Grant Nos. 41371437, 61203214, 61473072, 61773105, 61374147, and 61733003), the Fundamental Research Funds for the Central Universities (Grant Nos. N150402001 and N160404008), the National Key Research and Development Plan (2016YFC0801602), and the National Twelfth Five-Year Plan for Science and Technology Support (2015BAB15B01) of China.

References

  1. X. Jing-hai and J. Wen-li, "Study and discussion on beneficiation method of Mengjiagou hematite," Metal Mine, vol. 9, pp. 37–41, 2006.
  2. T. Cheng-hua, "Application of portable magnetic absorption meter in determination of magnetic rate of magnetite," Modern Mining, vol. 11, pp. 229–230, 2016.
  3. M. Ya-chun, X. Dong, and C. Jin-fu, "Magnesium ore grade classification based on near infrared spectroscopy and ELM algorithm," Spectroscopy and Spectral Analysis, vol. 37, no. 1, pp. 89–94, 2017.
  4. A.-P. Li, Z.-Y. Li, J.-P. Jia, and X.-M. Qin, "Chemical comparison of coat and kernel of mung bean by nuclear magnetic resonance-based metabolic fingerprinting approach," Spectroscopy Letters, vol. 49, no. 3, pp. 217–224, 2016.
  5. J. R. Becker, P. J. Skrodzki, P. K. Diwakar, and A. Hassanein, "Double-pulse neodymium YAG/carbon dioxide laser-induced breakdown spectroscopy for excitation of bulk and trace analytes," Spectroscopy Letters, vol. 49, no. 4, pp. 276–284, 2016.
  6. Y. Yang, S. J. Zhang, and Y. He, "Dynamic detection of fresh jujube based on ELM and visible/near infrared spectra," Spectroscopy and Spectral Analysis, vol. 35, no. 7, pp. 1870–1874, 2015.
  7. S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37–52, 1987.
  8. G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in 2004 IEEE International Joint Conference on Neural Networks, pp. 985–990, Budapest, Hungary, 2004.
  9. W. Tingxin, C. Xiaoyu, S. Liangshan, D. Rong, and W. Peng, "Prediction of parameter-optimized GA-ELM model for blasting in surface coal mines," Journal of China Coal Society, vol. 42, no. 3, pp. 630–638, 2017.
  10. M. Lin, F. Luo, S. Caihong, and X. Yuge, "A new hybrid intelligent extreme learning machine," Control and Decision, vol. 6, pp. 1078–1084, 2015.
  11. Y. Mao, D. Xiao, J. Cheng et al., "Multigrades classification model of magnesite ore based on SAE and ELM," Journal of Sensors, vol. 2017, Article ID 9846181, 9 pages, 2017.
  12. Q. Y. Zhu, A. K. Qin, P. N. Suganthan, and G. B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759–1763, 2005.
  13. G. Zhao, Z. Shen, C. Miao, and Z. Man, "On improving the conditioning of extreme learning machine: a linear case," in International Conference on Information, Communications and Signal Processing, pp. 234–238, Macau, China, 2009.
  14. B. Y. Qu, B. F. Lang, J. J. Liang, A. K. Qin, and O. D. Crisalle, "Two-hidden-layer extreme learning machine for regression and classification," Neurocomputing, vol. 175, pp. 826–834, 2016.
  15. D. Xiao, B. Li, and Y. Mao, "A multiple hidden layers extreme learning machine method and its application," Mathematical Problems in Engineering, vol. 2017, Article ID 4670187, 10 pages, 2017.
  16. D. Xiao, B. Li, and S. Zhang, "An online sequential multiple hidden layers extreme learning machine method with forgetting mechanism," Chemometrics and Intelligent Laboratory Systems, vol. 176, no. 5, pp. 126–133, 2018.
  17. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
  18. Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of IEEE ICEC Conference, pp. 69–73, Anchorage, 1998.
  19. Y. Zhang, D.-W. Gong, and W.-Q. Zhang, “A simplex method based improved particle swarm optimization and analysis on its global convergence,” Acta Automatica Sinica, vol. 35, no. 3, pp. 289–298, 2009. View at Publisher · View at Google Scholar · View at Scopus
  20. F. Pan, Q. Zhou, W.-X. Li, and Q. Gao, “Analysis of standard particle swarm optimization algorithm based on Markov chain,” Acta Automatica Sinica, vol. 39, no. 4, pp. 381–389, 2013. View at Publisher · View at Google Scholar · View at Scopus
  21. Z.-H. LIU, S.-W. ZHOU, K. LIU, and J. ZHANG, “Permanent magnet synchronous motor multiple parameter identification and temperature monitoring based on binary-modal adaptive wavelet particle,” Acta Automatica Sinica, vol. 39, no. 12, pp. 2121–2130, 2013. View at Publisher · View at Google Scholar · View at Scopus
  22. D. W. Gong, Y. Zhang, and C. L. Qi, “Localising odour source using multi-robot and anemotaxis-based particle swarm optimisation,” IET Control Theory & Applications, vol. 6, no. 11, p. 1661, 2012. View at Publisher · View at Google Scholar · View at Scopus
  23. Z. Yong, D. Gong, and J. Cheng, “Multi-objective particle swarm optimization approach for cost-based feature selection in classification,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 64–75, 2017. View at Google Scholar
  24. Y. Xu and Y. Shu, “Evolutionary extreme learning machine–based on particle swarm optimization,” in International Conference on Advances in Neural Networks, pp. 644–652, Springer-Verlag, 2006. View at Publisher · View at Google Scholar · View at Scopus
  25. F. Han, H. F. Yao, and Q. H. Ling, “An improved evolutionary extreme learning machine based on particle swarm optimization,” Neurocomputing, vol. 116, pp. 87–93, 2013. View at Publisher · View at Google Scholar · View at Scopus
  26. L. Wanhua, C. Yuzhong, G. Kun, G. Songrong, and L. Zhanghui, “Parallel extreme learning machine based on improved particle swarm optimization,” Pattern Recognition and Artificial Intelligence, vol. 29, no. 9, pp. 840–849, 2016. View at Google Scholar
  27. P. L. Bartlett, “The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network,” IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998. View at Publisher · View at Google Scholar · View at Scopus