Wireless Communications and Mobile Computing
Volume 2021 | Article ID 5589872 | https://doi.org/10.1155/2021/5589872
Research Article | Open Access
Special Issue: Intelligent Data Management Techniques in Multi-Homing Big Data Networks

Attribute-Associated Neuron Modeling and Missing Value Imputation for Incomplete Data

Xiaochen Lai, Jinchong Zhu, Liyong Zhang, Zheng Zhang, and Wei Lu

Academic Editor: Nawab Muhammad Faseeh Qureshi
Received: 11 Jan 2021 | Revised: 09 Mar 2021 | Accepted: 09 Apr 2021 | Published: 29 Apr 2021

Abstract

The imputation of missing values is an important topic in incomplete data analysis. Based on the auto associative neural network (AANN), this paper conducts regression modeling for incomplete data and imputes missing values. Since the AANN can efficiently estimate missing values under multiple missingness patterns, we introduce incomplete records into the modeling process and propose an attribute cross fitting model (ACFM) based on AANN. ACFM reconstructs the path of data transmission between output and input neurons and optimizes the model parameters by the training errors of the existing data, thereby improving its ability to fit the relations between attributes of incomplete data. Besides, for the problem of incomplete model input, this paper proposes a model training scheme that sets missing values as variables and updates the missing value variables iteratively together with the model parameters. The method of local learning and global approximation increases the precision of model fitting and the imputation accuracy of missing values. Finally, experiments on several datasets verify the effectiveness of the proposed method.

1. Introduction

The interference of various factors in the process of data collection, transmission, and storage may cause data loss to different degrees. The resulting incompleteness means that most computational intelligence techniques cannot be applied directly [1]. In cases where incomplete records cannot simply be deleted, an effective method is needed to impute the missing values.

At present, researchers have proposed a variety of imputation methods. The mean imputation method replaces each missing value with the mean of the existing values of the corresponding attribute [2]. The hot deck method finds the record most similar to the incomplete record in the database and then imputes the missing data with the values of this record [3]. The k-nearest neighbors (KNN) imputation method takes the weighted average of the k records closest to the incomplete record to impute missing values [4]. Additionally, model-based methods are usually an effective way to improve the accuracy of imputation. For example, the expectation-maximization (EM) method alternately performs the expectation step and the maximization step and iteratively updates the model parameters and missing values until convergence [5]. The multiple imputation method obtains candidate values through one or more models and combines the results to impute missing values [6]. The imputation method based on the linear model imputes missing values by modeling the linear relation between attributes [7]. It assumes a linear correlation in the data, but the relations in real data are complex, unknown, and often nonlinear.
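For illustration, a minimal sketch of the mean and KNN baselines is given below using scikit-learn's SimpleImputer and KNNImputer; the library choice, the toy data, and the parameters are our assumptions, not those of the cited works.

```python
# Illustrative mean and KNN imputation baselines with scikit-learn.
# X is an (n_records, n_attributes) array with missing entries as np.nan.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 6.0],
              [3.0, 4.0, 9.0]])

# Mean imputation: each missing value is replaced by its column mean [2].
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# KNN imputation: a weighted average of the k closest records [4].
X_knn = KNNImputer(n_neighbors=2, weights="distance").fit_transform(X)
```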

The neural network is flexible in construction. In theory, a neural network with nonlinear activation functions can approximate complex nonlinear relations [8]. An imputation model based on neural networks can therefore mine the complex association relations among the attributes of incomplete data. Imputation methods based on neural networks usually use complete records to train the network, then input prefilled incomplete records into the network and use the network output to impute the missing values [9]. Sharpe and Solly [10] constructed a multilayer perceptron (MLP) for each missingness pattern, which is used to fit the regression relation between the missing attributes and the existing attributes. However, in the case of multiple missingness patterns, the number of constructed models is large and training is time-consuming. Ankaiah and Ravi [11] proposed an improved MLP imputation method, which takes each missing attribute as output and the remaining attributes as input to construct a single-objective predictive network. The number of models constructed by this method equals the number of missing attributes. Although the MLP imputation model can fit the regression relations between data attributes, it does so at the expense of model training time.

The auto associative neural network (AANN) is a type of network with the same number of nodes in the output layer and the input layer; only one model needs to be built to impute incomplete data under all missingness patterns [12]. Marwala et al. [13] proposed an imputation method combining AANN and a genetic algorithm (GA) and applied it to two real datasets [14, 15]. This method takes the cost function of the AANN as the fitness function of the genetic algorithm and uses the genetic algorithm to impute missing values. Based on the framework proposed by Marwala, Nelwamondo et al. adopted principal component analysis to select a reasonable number of nodes in the hidden layer [16] and to reduce the dimension of the data [17]. Ssali and Marwala [18] used the intervals of continuous attributes divided by a decision tree as data boundaries, which further improved the imputation accuracy. In addition to the combination of AANN and GA, Ravi and Krishna [19] proposed four improved imputation models based on AANN: the general regression auto associative neural network (GRAANN), the particle swarm optimization based auto associative neural network (PSOAANN), the auto associative wavelet neural network (AAWNN), and the radial basis function auto associative neural network (RBFAANN). Among these models, GRAANN performs better than MLP and the other three models on most datasets and needs only one iteration to impute missing values. Gautam and Ravi proposed two further imputation models based on AANN: the auto associative extreme learning machine [20] and the counter propagation auto associative neural network [21]. Their experimental results show that the combination of local learning and global approximation yields better imputation results.

The above methods only use complete records to train the model, which avoids the problem of missing values during training. However, missing values in incomplete records lead to incomplete model input at the imputation stage. Since the MLP imputation method constructs a specific model for each missingness pattern, taking incomplete attributes as output and complete attributes as input, each incomplete record can be input directly into the subnet of the corresponding missingness pattern. The AANN imputation method, however, usually needs a prefilling method to deal with missing values during imputation. For instance, Ravi and Krishna [19] used attribute means to prefill missing values, Nishanth and Ravi [22] adopted the K-means and K-medoids methods, and Gautam and Ravi [21] used the nearest neighbor method based on grey distance.

The number of complete records is small when the missing rate of a dataset is high. If only complete records are used to train the network, a large amount of information in the incomplete records is lost, and too few records can even make the model impossible to build. Therefore, Silva-Ramírez et al. [23] prefilled missing values with a fixed value and then trained the network on all records. García-Laencina et al. [24–28] proposed a multitask network that uses zero to initialize missing values and allows incomplete records to participate in model training. Although prefilling incomplete records with fixed values lets them participate in model training, the prefilled values carry an estimation error; if the model is trained directly on the prefilled data, the accuracy of the final model is affected by this error. In addition, Yoon et al. [29] proposed an imputation method based on generative adversarial nets that generates data with a generator. The network architecture could also draw on the inception architecture [30] used in edge computing [31].

As mentioned above, imputation based on AANN handles multiple missingness patterns while improving training efficiency compared with MLP. Consequently, this paper conducts regression modeling of the attributes of incomplete data on the AANN architecture. By redesigning the data transmission structure of AANN, the representation of the regression relations between data attributes is enhanced. Moreover, to address the problem of incomplete model input, this paper proposes a model training scheme that takes missing values as variables and updates the missing value variables iteratively along with the model parameters during training. The improved model and training scheme make full use of the existing data in the incomplete dataset, gradually reduce the estimation error of the missing value variables during training, and increase the imputation accuracy through local learning and global approximation.

The rest of this paper is organized as follows. Section 2 introduces the MLP and AANN imputation models. Section 3 proposes ACFM based on AANN and a model training scheme named UMVDT. Section 4 analyzes the imputation performance of ACFM and UMVDT, and Section 5 concludes the paper.

2. MLP and AANN Imputation Models

MLP is a feed-forward artificial neural network composed of an input layer, an output layer, and several hidden layers. When applying the MLP method to impute missing values, an MLP imputation network needs to be constructed for each missingness pattern. Figure 1 shows an incomplete dataset with several missingness patterns, which differ in the positions and the number of missing values in each record. For an incomplete dataset that is missing at random, the higher the missing rate, the more missingness patterns there are. The imputation networks are shown in Figure 2, where $x_i$ represents the $i$-th record, $y_i$ represents the network output for the $i$-th record, and $d$ represents the number of attributes. If $m_p$ represents the indices of the missing attributes in the $p$-th missingness pattern, the cost function of the model is

$$E_p = \sum_{x_i \in X_c} \sum_{j \in m_p} \left( x_{ij} - y_{ij} \right)^2, \quad y_i = f(x_i; W), \tag{1}$$

where $X_c$ represents the set of complete records, $f$ represents the nonlinear mapping of the model, and $W$ represents the weights of the model.
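As a rough sketch of this per-pattern setup (our illustration, not the authors' implementation; the library, network size, and hyperparameters are assumed), one regressor can be trained for each missingness pattern, mapping the observed attributes of complete records to the attributes that pattern lacks:

```python
# Sketch: one MLP per missingness pattern p, trained on complete records X_c.
# m_p lists the missing-attribute indices of pattern p.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_pattern_model(X_c, m_p):
    obs = [j for j in range(X_c.shape[1]) if j not in m_p]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)
    model.fit(X_c[:, obs], X_c[:, m_p])  # minimizes equation (1) for pattern p
    return model, obs

def impute_record(model, obs, m_p, x):
    # Feed the observed attributes in, write the predictions into the gaps.
    x = x.copy()
    x[m_p] = model.predict(x[obs].reshape(1, -1))[0]
    return x
```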

AANN requires that the number of nodes in the output layer equals that in the input layer. To prevent the model from overfitting, the number of nodes in the hidden layer is usually set to be less than that in the input layer. As shown in Figure 3, the imputation method based on AANN can fill incomplete data under all missingness patterns through one structure. Generally, the model is trained on the complete record subset, and each prefilled incomplete record is then reconstructed to impute the corresponding missing values. The cost function can be expressed as

$$E = \sum_{x_i \in X_c} \sum_{j=1}^{d} \left( x_{ij} - y_{ij} \right)^2. \tag{2}$$
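A minimal AANN sketch in PyTorch might look as follows; the framework, layer sizes, and sigmoid activation are illustrative assumptions on our part:

```python
# Sketch of an AANN: d inputs, d outputs, and a narrower hidden layer.
import torch.nn as nn

class AANN(nn.Module):
    def __init__(self, d, hidden_dim):
        super().__init__()
        assert hidden_dim < d  # fewer hidden nodes to prevent overfitting
        self.net = nn.Sequential(
            nn.Linear(d, hidden_dim), nn.Sigmoid(),  # hidden layer
            nn.Linear(hidden_dim, d))                # linear output layer

    def forward(self, x):
        return self.net(x)  # trained to reconstruct x, as in equation (2)
```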

Each output value of the AANN model is calculated from all input values. As training proceeds, each output value tends simply to learn the input value at the same position, so the quality of the imputed values depends to a degree on the quality of the prefilled values in the imputation stage. Each output value of the MLP model, by contrast, is calculated by a dedicated regression network; compared with the MLP model, AANN therefore lacks clear regression relations to guide model training and missing value imputation.

3. Proposed Architecture

3.1. Attribute Cross Fitting Model

The AANN imputation model implements the imputation of multiple missingness patterns through one architecture, but it does not establish a clear regression relation between data attributes. In this paper, the regression relations between each attribute and the remaining attributes of an incomplete dataset are expressed in one architecture by redesigning the cost function of the model:

$$E = \sum_{x_i \in X} \sum_{j=1}^{d} \left( x_{ij} - y_{ij} \right)^2, \tag{3}$$

where $X$ represents an incomplete dataset. It can be seen from equation (3) that the $j$-th output value of the model is calculated from all input values except the $j$-th input value, which helps to establish a regression relation between each output value and the remaining input values. Moreover, the output of the model no longer depends on the corresponding input value; thus, the effect of the prefilled values is weakened during the imputation stage. To minimize the cost function, the network must fully learn the correlation between each output neuron and the noncorresponding input neurons. Therefore, the cost function effectively enhances the ability to mine the internal associations of attributes.

If the neural network is trained with incomplete records, the missing values need to be prefilled. However, the prefilled values carry an estimation error relative to the original data, so the model should exclude the training error between the prefilled data and its predicted data when optimizing the model parameters. This paper defines this error as the missing value error. Hence, when training the network with an incomplete dataset, the cost function that the model needs to optimize should be

$$E = \sum_{x_i \in X} \sum_{j \notin m_i} \left( x_{ij} - y_{ij} \right)^2, \tag{4}$$

where $m_i$ is the set of indices of the missing values in record $x_i$, and $j \notin m_i$ indicates that the missing value error is no longer used to optimize the model parameters. The model constructed on this cost function can fit the regression relations between data attributes in one architecture and is called the attribute cross fitting model (ACFM) in this paper.

The data transmission process of the output neurons of ACFM is shown in Figure 4 [32]. Suppose an incomplete record with two missing values is input into ACFM. Because ACFM does not use the missing value error to optimize the model parameters, the output values at the missing positions are not calculated. Each remaining output value is calculated from all input values except the one at its own position, and the calculations of the other output values follow similar processes. It can be seen that the computational cost of ACFM is the same as that of AANN. In our implementation, the input record is replicated $d$ times, forward calculation is performed in a fully connected manner, and the output is then sliced so that the required values are taken out. Therefore, the number of parameters of ACFM in the experiments is the number of parameters of AANN multiplied by the number of attributes $d$.
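The following PyTorch sketch illustrates one plausible reading of this construction: the record is replicated $d$ times, attribute $j$ is masked out of the $j$-th copy before a fully connected forward pass, and the $d$ scalar outputs are concatenated (the "sliced" output). The class name, hidden size, and masking-by-zero trick are our assumptions:

```python
# Minimal ACFM sketch: output j never sees input j, and the cost of
# equation (4) only counts errors at observed positions.
import torch
import torch.nn as nn

class ACFM(nn.Module):
    def __init__(self, d, hidden_dim):
        super().__init__()
        self.d = d
        # One sub-network per attribute, so the parameter count is that of
        # an AANN multiplied by d, matching the description above.
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(d, hidden_dim), nn.Sigmoid(),
                          nn.Linear(hidden_dim, 1))
            for _ in range(d)])

    def forward(self, x):
        # x: (batch, d). Zero out attribute j in the j-th replicated copy.
        outs = []
        for j, net in enumerate(self.nets):
            x_j = x.clone()
            x_j[:, j] = 0.0
            outs.append(net(x_j))
        return torch.cat(outs, dim=1)  # (batch, d)

def masked_cost(y, x, observed):
    # observed: 0/1 mask, 1 where the value exists (j not in m_i).
    return ((y - x) ** 2 * observed).sum()
```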

3.2. Updating Missing Values during Training

Prefilling the missing values solves the problem of incomplete model input, but when prefilled incomplete records are used to train the model directly, the quality of the prefilled values has an important impact on the quality of the trained model. The prefilled values have an initial estimation error, which reduces the accuracy of the model. Therefore, this paper proposes a model training scheme that treats missing values as variables and iteratively updates the missing values during the training process (UMVDT). UMVDT dynamically adjusts the missing values and gradually reduces their estimation error, so that the missing values come to satisfy the fitting relationship determined by the existing data. As shown in Figure 5, the UMVDT training scheme initializes the missing value variables in the incomplete records, inputs the incomplete records into ACFM to calculate the error between the output and input values, and then updates the missing value variables and the network parameters through the backpropagation algorithm. This process is repeated over all records until the model converges. In a model trained with UMVDT, each missing value variable is optimized by the regression structure within the incomplete data; the accuracy of the model improves as training deepens, and the missing values predicted by the model also become more accurate.
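A compact PyTorch sketch of this idea (our illustration, reusing the hypothetical ACFM module above; the mean initialization, learning rate, and momentum follow Section 4.2) holds the missing entries in a trainable tensor so that the same backpropagation step that updates the weights also updates the missing values, which is what equations (5)-(13) below derive by hand:

```python
# UMVDT sketch: missing values are trainable variables updated jointly
# with the model parameters. x_raw is mean-prefilled; observed is a 0/1
# float mask with 1 where a value exists.
import torch

def train_umvdt(model, x_raw, observed, epochs=200, lr=0.2, momentum=0.9):
    missing = torch.nn.Parameter(x_raw.clone())   # missing value variables
    opt = torch.optim.SGD(list(model.parameters()) + [missing],
                          lr=lr, momentum=momentum)
    for _ in range(epochs):
        # Existing values stay fixed; missing positions come from the
        # trainable variable tensor.
        x = x_raw * observed + missing * (1 - observed)
        y = model(x)
        loss = ((y - x_raw) ** 2 * observed).sum()  # equation (4)
        opt.zero_grad()
        loss.backward()   # gradients flow to weights AND missing values
        opt.step()
    # Impute with the reconstructed model output at the missing positions.
    return x_raw * observed + model(x).detach() * (1 - observed)
```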

Let the input layer of ACFM be the first layer and the output layer be the $L$-th layer, and let $W^{(l)}$ and $b^{(l)}$ represent the weights and thresholds from layer $l$ to layer $l+1$ ($1 \le l < L$). Each output neuron of the model is output directly after linear summation, so it can be expressed as

$$y_{ij} = z_j^{(L)} = \sum_{k=1}^{n_{L-1}} w_{kj}^{(L-1)} o_k^{(L-1)} + b_j^{(L-1)}, \tag{5}$$

where $z_j^{(l)}$ represents the linear summation of the $j$-th neuron in layer $l$, $n_l$ represents the number of neurons in layer $l$, and $o_j^{(l)}$ represents the output of the $j$-th neuron in layer $l$. Corresponding to each neuron in the output layer, the output of the $j$-th neuron in each hidden layer can be expressed as

$$o_j^{(l)} = \sigma\left(z_j^{(l)}\right) = \sigma\left(\sum_{k=1}^{n_{l-1}} w_{kj}^{(l-1)} o_k^{(l-1)} + b_j^{(l-1)}\right), \tag{6}$$

where $\sigma$ is the activation function. According to equation (4), the error between the $i$-th record and the output of the network is

$$E_i = \sum_{j \notin m_i} \left( x_{ij} - y_{ij} \right)^2. \tag{7}$$

If we define the intermediate variables as

$$\delta_j^{(L)} = \frac{\partial E_i}{\partial z_j^{(L)}} = \begin{cases} -2\left( x_{ij} - y_{ij} \right), & j \notin m_i, \\ 0, & j \in m_i, \end{cases} \tag{8}$$

where $j \notin m_i$ represents that the input value corresponding to the $j$-th predicted value is available, and $j \in m_i$ represents that the input value corresponding to the $j$-th predicted value is missing, so the partial derivative is set to zero and the corresponding model parameters are not optimized. When $l < L$, $\delta_j^{(l)}$ is

$$\delta_j^{(l)} = \frac{\partial E_i}{\partial z_j^{(l)}} = \sigma'\left(z_j^{(l)}\right) \sum_{k=1}^{n_{l+1}} w_{jk}^{(l)} \delta_k^{(l+1)},$$

and it can be concluded that the partial derivative of the error with respect to the network weight $w_{kj}^{(l)}$ is

$$\frac{\partial E_i}{\partial w_{kj}^{(l)}} = o_k^{(l)} \delta_j^{(l+1)}. \tag{9}$$

Similarly, the partial derivative of the error with respect to the threshold $b_j^{(l)}$ is

$$\frac{\partial E_i}{\partial b_j^{(l)}} = \delta_j^{(l+1)}. \tag{10}$$

Assuming that the learning rate is $\eta$, when the gradient descent method is used to optimize the model, the updating rule of the model parameters is

$$w_{kj}^{(l)} \leftarrow w_{kj}^{(l)} - \eta \frac{\partial E_i}{\partial w_{kj}^{(l)}}, \qquad b_j^{(l)} \leftarrow b_j^{(l)} - \eta \frac{\partial E_i}{\partial b_j^{(l)}}. \tag{11}$$

Missing value variables are updated together with the model parameters during model training. It can be deduced from equation (9) that the partial derivative of the error with respect to the missing value variable $x_{ij}$ ($j \in m_i$) is

$$\frac{\partial E_i}{\partial x_{ij}} = \sum_{k=1}^{n_2} w_{jk}^{(1)} \delta_k^{(2)}, \tag{12}$$

and the updating rule of the missing value variable is

$$x_{ij} \leftarrow x_{ij} - \eta \frac{\partial E_i}{\partial x_{ij}}. \tag{13}$$

In summary, the imputation algorithm based on the ACFM model and UMVDT training scheme is described as follows:

INPUT: complete dataset $X$, missing rate, ACFM, learning rate $\eta$, maximum rounds $T$.
OUTPUT: the imputation error of $X$ at the specified missing rate.
Generate an incomplete dataset from $X$ according to the specified missing rate.
Initialize missing values as variables, model weights, and thresholds.
Set t = 0, precision = 1.
while t < T and precision > 0.001 do
 for x in the incomplete dataset:
  Input x into the model and get output y.
  Calculate the error for updating the model parameters and missing value variables, respectively.
 end for
 Reconstruct the model output and predict missing values.
 Calculate the imputation error and precision.
 t = t + 1
end while
Output the imputation error.

4. Experiment

4.1. Datasets

In order to verify the imputation performance of the proposed method, ten complete datasets obtained from the UCI repository are used in our experiments; they are described in Table 1. Among them, Stock is often used for clustering tasks, Concrete is often used for regression tasks, and the remaining datasets can be used for classification tasks. Most of the data are numeric; the nonnumeric ID columns were deleted in the experiments. Additional information is available on the UCI official website. To form incomplete datasets, values are deleted randomly according to specified missing rates of 5%, 10%, 15%, 20%, 25%, and 30%, while ensuring that each incomplete record retains at least one attribute value so that it can be used for normal training.
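A hedged sketch of this deletion procedure (our illustration; the function and variable names are hypothetical) removes randomly chosen cells at the given rate while guaranteeing that every record keeps at least one observed value:

```python
# Generate an incomplete dataset at a given missing rate, keeping at least
# one observed attribute per record. Purely illustrative.
import numpy as np

def make_incomplete(X, missing_rate, seed=None):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mask = np.ones((n, d), dtype=bool)            # True = observed
    n_missing = int(round(missing_rate * n * d))
    deleted = 0
    for c in rng.permutation(n * d):              # shuffled candidate cells
        if deleted == n_missing:
            break
        i, j = divmod(int(c), d)
        if mask[i].sum() > 1:                     # keep >= 1 value per record
            mask[i, j] = False
            deleted += 1
    X_inc = X.astype(float).copy()
    X_inc[~mask] = np.nan
    return X_inc, mask
```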


Table 1: Description of the datasets.

Datasets    Records  Attributes    Datasets  Records  Attributes
Blood       748      4             Iris      150      4
Buddymove   249      6             Seeds     210      7
Ecoli       336      7             Stock     252      12
Glass       214      10            Wine      178      13
Concrete    1030     9             Abalone   4177     7

4.2. Experimental Design

Six imputation methods based on MLP, AANN, and ACFM are implemented. The methods based on AANN and ACFM are trained with both the traditional training scheme and the UMVDT training scheme. The traditional training scheme only uses the mean value to prefill missing values and does not update them. To verify the effect of the missing value error on the imputation accuracy of the model, this paper uses equation (3) with the missing value error and equation (4) without the missing value error as the cost function, respectively. The specific methods are described as follows:

(1) The imputation method based on the MLP model and the traditional training scheme (MLP-I): taking each missing attribute as output and the other attributes as input, multiple single-objective predictive networks are established based on MLP. These models are trained with complete records during the training stage. In the imputation stage, the incomplete records are prefilled with the mean method, and missing values are imputed with the reconstructed model output.

(2) The imputation method based on the AANN model and the traditional training scheme (AANN-I): the imputation process is the same as MLP-I, but the architecture is AANN.

(3) The imputation method based on the ACFM model where the missing value error is used to optimize the model parameters (ACFM-MEI): equation (3) is used as the cost function of ACFM. The incomplete records are prefilled with the mean method, and then all records are used to train the model. Finally, the reconstructed model output is used to impute missing values.

(4) The imputation method based on the ACFM model where the missing value error is not used to optimize the model parameters (ACFM-I): equation (4) is used as the cost function of ACFM, and the process is the same as ACFM-MEI.

(5) The imputation method based on the AANN model and the UMVDT training scheme (AANN-UMVDT): the mean value is used to initialize the missing value variables. After that, the method uses all data to train the model, dynamically updates the missing value variables during training, and reconstructs the model output to impute missing values.

(6) The imputation method based on the ACFM model and the UMVDT training scheme (ACFM-UMVDT): the process is the same as AANN-UMVDT, but the architecture is ACFM.

All models are optimized by the gradient descent method with momentum. The learning rate is set to 0.2, and the momentum is set to 0.9. All methods were repeated ten times at each missing rate, and the average of the ten imputation errors was taken as the experimental result. The imputation error is evaluated by the mean absolute percentage error (MAPE):

$$\mathrm{MAPE} = \frac{1}{n_m} \sum_{x_i \in X_{\mathrm{inc}}} \sum_{j \in m_i} \left| \frac{x_{ij} - y_{ij}}{x_{ij}} \right|,$$

where $X_{\mathrm{inc}}$ represents the incomplete record subset and $n_m$ represents the number of missing values.
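A small helper matching this definition might be written as follows (illustrative names; it assumes the true values at the missing positions are nonzero):

```python
# MAPE over missing positions only, in fraction form as defined above.
import numpy as np

def mape(X_true, X_imputed, mask):
    # mask: True = observed; errors are measured only where values are missing.
    missing = ~mask
    err = np.abs((X_true[missing] - X_imputed[missing]) / X_true[missing])
    return err.mean()
```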

4.3. Experimental Results

The experimental results are shown in Tables 2 and 3.


Table 2: Imputation errors (MAPE) on the Blood, Buddymove, Ecoli, Glass, and Iris datasets.

Datasets    Methods       5%      10%     15%     20%     25%     30%
Blood       MLP-I         1.113   1.122   0.872   0.680   0.861   0.985
            AANN-I        0.983   1.087   1.274   1.114   1.188   1.212
            AANN-UMVDT    0.941   0.968   1.005   0.974   1.026   1.124
            ACFM-MEI      0.513   0.575   0.683   0.728   0.787   0.854
            ACFM-I        0.488   0.537   0.620   0.671   0.728   0.764
            ACFM-UMVDT    0.449   0.510   0.599   0.633   0.754   0.800
Buddymove   MLP-I         0.240   0.151   0.166   0.199   0.213   0.237
            AANN-I        0.202   0.234   0.243   0.272   0.261   0.292
            AANN-UMVDT    0.140   0.147   0.159   0.163   0.167   0.177
            ACFM-MEI      0.122   0.149   0.172   0.199   0.200   0.222
            ACFM-I        0.122   0.142   0.164   0.184   0.186   0.197
            ACFM-UMVDT    0.105   0.115   0.131   0.150   0.150   0.176
Ecoli       MLP-I         0.472   0.231   0.233   0.236   0.274   0.285
            AANN-I        0.192   0.273   0.281   0.259   0.275   0.304
            AANN-UMVDT    0.185   0.247   0.261   0.238   0.257   0.276
            ACFM-MEI      0.159   0.226   0.253   0.237   0.259   0.283
            ACFM-I        0.157   0.220   0.241   0.226   0.247   0.269
            ACFM-UMVDT    0.155   0.212   0.248   0.232   0.241   0.269
Glass       MLP-I         0.287   0.351   0.298   0.343   0.423   0.450
            AANN-I        0.376   0.405   0.407   0.363   0.417   0.439
            AANN-UMVDT    0.407   0.429   0.403   0.356   0.414   0.419
            ACFM-MEI      0.311   0.342   0.344   0.303   0.357   0.354
            ACFM-I        0.290   0.346   0.338   0.314   0.350   0.350
            ACFM-UMVDT    0.286   0.327   0.346   0.314   0.353   0.345
Iris        MLP-I         0.157   0.150   0.237   0.234   0.272   0.335
            AANN-I        0.298   0.358   0.376   0.386   0.401   0.455
            AANN-UMVDT    0.150   0.158   0.190   0.188   0.189   0.234
            ACFM-MEI      0.170   0.186   0.250   0.279   0.297   0.367
            ACFM-I        0.153   0.139   0.219   0.217   0.237   0.272
            ACFM-UMVDT    0.139   0.128   0.167   0.173   0.186   0.234


Table 3: Imputation errors (MAPE) on the Seeds, Stock, Wine, Concrete, and Abalone datasets.

Datasets    Methods       5%      10%     15%     20%     25%     30%
Seeds       MLP-I         0.071   0.093   0.104   0.114   0.096   0.151
            AANN-I        0.083   0.096   0.095   0.109   0.097   0.122
            AANN-UMVDT    0.067   0.077   0.072   0.084   0.076   0.085
            ACFM-MEI      0.070   0.080   0.080   0.097   0.088   0.099
            ACFM-I        0.067   0.077   0.076   0.090   0.083   0.090
            ACFM-UMVDT    0.062   0.068   0.067   0.081   0.076   0.088
Stock       MLP-I         0.109   0.145   0.171   0.227   0.279   0.341
            AANN-I        0.181   0.203   0.220   0.236   0.268   0.320
            AANN-UMVDT    0.177   0.185   0.185   0.194   0.190   0.201
            ACFM-MEI      0.123   0.150   0.159   0.175   0.176   0.186
            ACFM-I        0.113   0.144   0.154   0.168   0.172   0.176
            ACFM-UMVDT    0.102   0.126   0.140   0.170   0.181   0.186
Wine        MLP-I         0.183   0.215   0.282   0.324   0.372   0.488
            AANN-I        0.211   0.250   0.246   0.294   0.352   0.392
            AANN-UMVDT    0.199   0.214   0.205   0.208   0.215   0.233
            ACFM-MEI      0.172   0.197   0.198   0.204   0.211   0.225
            ACFM-I        0.176   0.185   0.189   0.196   0.201   0.210
            ACFM-UMVDT    0.175   0.186   0.194   0.199   0.206   0.215
Concrete    MLP-I         0.452   0.402   0.401   0.450   0.481   0.570
            AANN-I        0.465   0.405   0.478   0.519   0.524   0.569
            AANN-UMVDT    0.501   0.429   0.466   0.498   0.493   0.511
            ACFM-MEI      0.300   0.288   0.320   0.372   0.372   0.403
            ACFM-I        0.310   0.298   0.331   0.383   0.361   0.386
            ACFM-UMVDT    0.336   0.342   0.400   0.436   0.431   0.504
Abalone     MLP-I         0.155   0.219   0.352   0.499   0.451   0.631
            AANN-I        0.567   0.547   0.605   0.633   0.632   0.535
            AANN-UMVDT    0.133   0.114   0.154   0.189   0.191   0.337
            ACFM-MEI      0.184   0.212   0.248   0.336   0.381   0.467
            ACFM-I        0.145   0.163   0.196   0.223   0.243   0.246
            ACFM-UMVDT    0.119   0.137   0.125   0.168   0.171   0.193

4.4. Experimental Discussion

The impact of architecture on imputation results: by observing the MAPE values of ACFM-I, AANN-I, and MLP-I in Tables 2 and 3, we can see that the results of ACFM-I are slightly worse than those of MLP-I in four cases: Ecoli at a missing rate of 15%, Glass at missing rates of 5% and 15%, and Stock at a missing rate of 5%. In all other cases, the results of ACFM-I are better than those of MLP-I and AANN-I. Besides, forty-three of the sixty imputation results of MLP-I are superior to those of AANN-I. This shows that MLP can characterize the regression relations within a dataset more accurately than AANN, thereby obtaining higher imputation accuracy. Compared with AANN, ACFM increases the ability to fit regression relations by modifying the cost function. Meanwhile, compared with MLP, ACFM fits multiple regression relations through one architecture, which increases the generalization ability of ACFM while improving the imputation accuracy.

The impact of missing value error on imputation results: it can be observed from Tables 2 and 3 that ACFM-I performs slightly worse than ACFM-MEI at the missing rates of 10% and 20% on the Glass dataset, 5% on the Wine dataset, and four of the missing rates on the Concrete dataset. In all other cases, the imputation results of ACFM-I are better than those of ACFM-MEI. This shows that using the missing value error to optimize the model parameters affects the accuracy of the modeling and thus degrades the imputation results.

Taking the Iris dataset as an example, the imputation results of ACFM-MEI and ACFM-I at missing rates of 5%-30% are shown in Figure 6. As the missing rate increases, the gap between the imputation results of ACFM-MEI and ACFM-I becomes larger. If the missing value error continues to be used to optimize the model parameters as the missing values in the dataset increase, the deviation of the model also increases. Therefore, equation (4) is used as the cost function of ACFM in this paper; that is, only the errors of the existing data are used to optimize the model parameters, which is both reasonable and correct.

Comparison between UMVDT and the traditional training scheme: except for the results on the Glass and Concrete datasets at missing rates of 5% and 10%, the results of AANN-UMVDT are better than those of AANN-I. Among the 60 imputation results, the results of ACFM-UMVDT are better than those of ACFM-I in 66.7% of cases. The exceptions are concentrated on the Concrete dataset, whose data values vary greatly: there are many zero values and large values, and many samples share the same value in some attributes. UMVDT changes the missing values during the training process, which may make the imputation results for many samples with identical values unstable. The above results show that the UMVDT training scheme has higher imputation performance than the traditional one. The UMVDT training scheme makes full use of all the existing data in incomplete records and takes missing values as variables so that they gradually match the fitting relationship. The missing value variables and the model parameters are updated alternately, so the imputation effect can be improved significantly.

When the missing rate of the Iris dataset is 15%, the imputation errors of ACFM-I and ACFM-UMVDT and the variation of the missing value variables (MVV) of ACFM-UMVDT in each round are shown in Figure 7. It can be seen that the missing values become stable soon after a short period of fluctuation, and the imputation results of ACFM-UMVDT also stabilize as the number of iteration rounds increases. With the iterative updating of the missing values, not only are the MAPE values calculated from the missing value variables more accurate than those of the original model, but the imputation accuracy can also be further improved by the model trained on the iteratively updated data.

The convergence of the proposed method: we take the Iris dataset as an example to verify the convergence of the proposed method. Figure 8 shows the fitting error of ACFM-UMVDT at various missing rates. It can be observed that all fitting error curves decrease to different degrees at the beginning and then gradually stabilize. This is because the UMVDT training scheme constantly updates the missing value variables and thereby changes the missing values in the incomplete records; the missing value variables and the model parameters converge continuously in the alternate updating process. The curves in Figure 8 show that the imputation method proposed in this paper has ideal convergence.

5. Conclusions

To solve the problem of missing value imputation, this paper conducts attribute association modeling for incomplete data based on AANN. By modifying the cost function of AANN, the regression relation between each attribute and the remaining attributes of incomplete data is represented in one architecture, and ACFM is designed to enhance the fitting of the association relations between data attributes. Only the training errors of the existing data are used to optimize the model, so that the inaccurate error between the missing values and their predicted values does not affect the optimization. In addition, for the problem of incomplete model input, this paper proposes the UMVDT training scheme, which sets missing values as variables and updates the model parameters and missing value variables alternately. UMVDT gradually optimizes the missing value variables through the regression structure of the model and further reduces the negative impact that the uncertainty of missing values at the model input has on the model. Experimental results show that the ACFM model obtains more accurate imputation results than the MLP and AANN models, and that, compared with the traditional training scheme, UMVDT further improves the imputation accuracy of the AANN and ACFM models by iteratively updating the missing value variables.

Data Availability

All datasets in this study can be downloaded from http://archive.ics.uci.edu/ml/datasets.php, and all experimental results are included in this published article.

Conflicts of Interest

We declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China (62076050, 62073056) and National Key R&D Program of China (2018YFB1700200).

References

1. F. V. Nelwamondo, D. Golding, and T. Marwala, "A dynamic programming approach to missing data estimation using neural networks," Information Sciences, vol. 237, pp. 49–58, 2013.
2. S. M. C. M. Nor, S. M. Shaharudin, S. Ismail, N. H. Zainuddin, and M. L. Tan, "A comparative study of different imputation methods for daily rainfall data in east-coast peninsular Malaysia," Bulletin of Electrical Engineering and Informatics, vol. 9, no. 2, pp. 635–643, 2020.
3. V. R. Elgin Christo, H. Khanna Nehemiah, B. Minu, and A. Kannan, "Correlation-based ensemble feature selection using bioinspired algorithms and classification using backpropagation neural network," Computational and Mathematical Methods in Medicine, vol. 2019, Article ID 7398307, 17 pages, 2019.
4. I. B. Aydilek and A. Arslan, "A novel hybrid approach to estimating missing values in databases using k-nearest neighbors and neural networks," International Journal of Innovative Computing, Information and Control, vol. 7, no. 8, pp. 4705–4717, 2012.
5. Z. Ghahramani and M. Jordan, "Supervised learning from incomplete data via an EM approach," in Advances in Neural Information Processing Systems, J. Cowan, G. Tesauro, and J. Alspector, Eds., vol. 6, pp. 120–127, Morgan-Kaufmann, 1994.
6. R. Suphanchaimat, S. Limwattananon, and W. Putthasri, "Multiple imputation technique: handling missing data in real world health care research," Southeast Asian Journal of Tropical Medicine and Public Health, vol. 48, no. 3, pp. 694–703, 2017.
7. K. Yang, J. Li, and C. Wang, "Missing values estimation in microarray data with partial least squares regression," in International Conference on Computational Science, pp. 662–669, Springer, Berlin, Heidelberg, May 2006.
8. A. Tealab, "Time series forecasting using artificial neural networks methodologies: a systematic review," Future Computing and Informatics Journal, vol. 3, no. 2, pp. 334–340, 2018.
9. I. A. Gheyas and L. S. Smith, "A neural network-based framework for the reconstruction of incomplete data sets," Neurocomputing, vol. 73, no. 16-18, pp. 3039–3065, 2010.
10. P. K. Sharpe and R. J. Solly, "Dealing with missing values in neural network-based diagnostic systems," Neural Computing & Applications, vol. 3, no. 2, pp. 73–77, 1995.
11. N. Ankaiah and V. Ravi, "A novel soft computing hybrid for data imputation," in Proceedings of the International Conference on Data Mining (DMIN), The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), Las Vegas, USA, 2011.
12. L. Gondara and K. Wang, "MIDA: multiple imputation using denoising autoencoders," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 260–272, Springer, Cham, 2018.
13. M. Abdella and T. Marwala, "The use of genetic algorithms and neural networks to approximate missing data in database," in IEEE 3rd International Conference on Computational Cybernetics, 2005. ICCC 2005, pp. 207–212, Mauritius, April 2005.
14. B. L. Betechuoh, T. Marwala, and T. Tettey, "Autoencoder networks for HIV classification," Current Science, vol. 91, no. 11, pp. 1467–1473, 2006.
15. T. Marwala and S. Chakraverty, "Fault classification in structures with incomplete measured data using autoassociative neural networks and genetic algorithm," Current Science, vol. 90, no. 4, pp. 542–548, 2006.
16. F. J. Mistry, F. V. Nelwamondo, and T. Marwala, "Missing data estimation using principle component analysis and autoassociative neural networks," Journal of Systemics, Cybernatics and Informatics, vol. 7, no. 3, pp. 72–79, 2009.
17. A. K. Mohamed, F. V. Nelwamondo, and T. Marwala, "Estimating missing data using neural network techniques, principal component analysis and genetic algorithms," in Proceedings of the Eighteenth Annual Symposium of the Pattern Recognition Association of South Africa, Pietermaritzburg, South Africa, 2007.
18. G. Ssali and T. Marwala, "Estimation of missing data using computational intelligence and decision trees," 2007, https://arxiv.org/abs/0709.1640.
19. V. Ravi and M. Krishna, "A new online data imputation method based on general regression auto associative neural network," Neurocomputing, vol. 138, pp. 106–113, 2014.
20. C. Gautam and V. Ravi, "Data imputation via evolutionary computation, clustering and a neural network," Neurocomputing, vol. 156, pp. 134–142, 2015.
21. C. Gautam and V. Ravi, "Counter propagation auto-associative neural network based data imputation," Information Sciences, vol. 325, pp. 288–299, 2015.
22. K. J. Nishanth and V. Ravi, "A computational intelligence based online data imputation method: an application for banking," Journal of Information Processing Systems, vol. 9, no. 4, pp. 633–650, 2013.
23. E. L. Silva-Ramírez, R. Pino-Mejías, M. López-Coello, and M. D. Cubiles-de-la-Vega, "Missing value imputation on missing completely at random data using multilayer perceptrons," Neural Networks, vol. 24, no. 1, pp. 121–129, 2011.
24. P. J. García-Laencina, J.-L. Sancho-Gómez, and A. R. Figueiras-Vidal, "Classifying patterns with missing values using multi-task learning perceptrons," Expert Systems with Applications, vol. 40, no. 4, pp. 1333–1341, 2013.
25. P. J. García-Laencina, J. L. Sancho-Gómez, and A. R. Figueiras-Vidal, "Pattern classification with missing data: a review," Neural Computing and Applications, vol. 19, no. 2, pp. 263–282, 2010.
26. J. M. Jerez, I. Molina, P. J. García-Laencina et al., "Missing data imputation using statistical and machine learning methods in a real breast cancer problem," Artificial Intelligence in Medicine, vol. 50, no. 2, pp. 105–115, 2010.
27. P. J. García-Laencina, J. Serrano, A. R. Figueiras-Vidal, and J. L. Sancho-Gómez, "Multi-task neural networks for dealing with missing inputs," in International Work-Conference on the Interplay Between Natural and Artificial Computation, pp. 282–291, Springer, Berlin, Heidelberg, June 2007.
28. P. J. García-Laencina, J. Sancho-Gomez, and A. R. Figueiras-Vidal, "Pattern classification with missing values using multitask learning," in The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 3594–3601, Vancouver, BC, Canada, July 2006.
29. J. Yoon, J. Jordon, and M. Schaar, "GAIN: missing data imputation using generative adversarial nets," in International Conference on Machine Learning, vol. 80, pp. 5689–5698, PMLR, Stockholm, Sweden, July 2018.
30. X. Kong, K. Wang, S. Wang et al., "Real-time mask identification for COVID-19: an edge computing-based deep learning framework," IEEE Internet of Things Journal, 2021.
31. X. Kong, S. Tong, H. Gao et al., "Mobile edge cooperation optimization for wearable internet of things: a network representation-based framework," IEEE Transactions on Industrial Informatics, vol. 17, no. 7, pp. 5050–5058, 2021.
32. J. Zhu, L. Zhang, X. Lai, and G. Zhang, "Imputation of incomplete data based on attribute cross fitting model and iterative missing value variables," in International Symposium on Neural Networks, pp. 167–175, Springer, Cham, December 2020.

Copyright © 2021 Xiaochen Lai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
